Columns: Response (string, 8–2k characters), Instruction (string, 18–2k characters), Prompt (string, 14–160 characters).
Most old tape backups are one of: tar, cpio, or dump/restore. If you have some of the data written to disk, you can determine the format using the file command.
$ file {data}
{data}: new-fs dump file (big endian), Previous dump Mon Oct 28 17:57:36 2019, This dump Wed Dec 31 16:00:00 1969, Volume 1, Level zero, type: tape header, Label none, Filesystem /, Device /dev/dsk/c1t1d0s0, Host myhostname, Flags 1
$ file {data}
{data}: USTAR tar archive
If I had to venture a guess, I would say the data is in dump format. Remember also that a tape may have multiple data streams separated by EOF markers, so you may need to use the mt command to get to the data:
$ mt fsf 1
will move past the next EOF marker.
Comment from the asker (AlMacOwl): the tapes have a single backup file, and the file command reports "new-fs dump file", so that seems conclusive.
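A minimal sketch of how one might confirm and list such a dump tape from the shell; the no-rewind device names (/dev/rmt/0n on Solaris, /dev/nst0 on Linux) are assumptions, so substitute your own:
# read a few blocks off the tape and identify them
mt -f /dev/rmt/0n rewind
dd if=/dev/rmt/0n bs=512 count=64 of=/tmp/probe.bin
file /tmp/probe.bin
# if it is a dump image, list its table of contents without extracting anything
mt -f /dev/rmt/0n rewind
ufsrestore tf /dev/rmt/0n      # on Linux, with the dump package: restore -t -f /dev/nst0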
I have some old Solaris backups (to tape), I think v4, that I want to restore. I'm not sure what the format is. The binary has 1024 blocks, each block starts with 00 00 00 01,02,03,04,05 or 06 and then a 4 byte epoch date. I've figured out the headers: 00 00 00 01 - Backup Header 00 00 00 02 - Regular file content 00 00 00 04 - File continuation 00 00 00 05 - End of backup 00 00 00 03 & 06 - Some type of mask with lots of FF FF FF ... Ideally I would love to get a specification for this format, but at least it would be good to know what it is.
Solaris backup old
The filter name has changed in the latest versions of the plugin; the new filter is ai1wm_exclude_themes_from_export. Full code to try:
// EXCLUDE NODE MODULES
add_filter( 'ai1wm_exclude_themes_from_export', function ( $exclude_filters ) {
    $exclude_filters[] = 'theme-name/node_modules'; // insert your theme name
    return $exclude_filters;
} );
Source: https://wordpress.org/support/topic/excluding-node_modules-via-filter-not-working-anymore/#post-14913666
I want to exclude the folder /wp-content/uploads/download-manager-files from backup because it contains files with several gigabytes of data. For this I used the following hook in the functions.php of my theme: add_filter('ai1wm_exclude_content_from_export', function($exclude_filters) { $exclude_filters[] = 'uploads/download-manager-files'; return $exclude_filters; }); The backup runs successfully, but it includes the /wp-content/uploads/download-manager-files folder. How do I apply the hook correctly to make it work?
How can I exclude folders in All-in-One WP Migration?
The OAuth playground is only intended to be used for testing. Tokens created on the playground will only work for about two weeks. You should implement your own authorization flow; see https://developers.google.com/identity/protocols/oauth2 (pointed out in the comments) for documentation.
I am using google drive api with laravel for a continuous backup, so I am using following packages "nao-pon/flysystem-google-drive": "~1.1", and "spatie/laravel-backup": "^6.14" I have set up google drive api v3 with refresh token and put it into .env FILESYSTEM_CLOUD=google GOOGLE_DRIVE_CLIENT_ID=****.apps.googleusercontent.com GOOGLE_DRIVE_CLIENT_SECRET=**** GOOGLE_DRIVE_REFRESH_TOKEN=**** GOOGLE_DRIVE_FOLDER_ID=**** so everything works properly, and I can use google drive as a disk to store the back up everyday through a cron job, the only problem is after a week the refresh token gets expired(I assume) and stops working with this error message, Failed to authenticate on SMTP server with username "****" using 2 possible authenticators. Authenticator LOGIN returned Expected response code 235 but got code "535", with message "535 Incorrect authentication data if I change the refresh token again from oathplayground and place it into .env it starts working again for a week. so how can I solve this problem thus I need not to generate the token every week.
Failed to authenticate on SMTP server with username "***" using 2 possible authenticators
You can use find with -ctime to search for .tar.gz files changed in the last 7 days and then loop over the results, ftping each one. Using this logic with your existing solution (note that find prints full paths, so each result can be passed straight to mput):
#!/bin/sh
USERNAME="ftp user"
PASSWORD="ftp password"
SERVER="IP or domain"
# local directory to pick up *.tar.gz files
FILE="/path/"
# remote server directory to upload backup
BACKUPDIR="/pro/backup/sql"

find "$FILE" -ctime -7 -name "*.tar.gz" | while read -r fil; do
# login to remote server
ftp -n -i "$SERVER" <<EOF
user $USERNAME $PASSWORD
cd $BACKUPDIR
mput "$fil"
quit
EOF
done
I have a VPS running on Centos 7, and created a cron job to dump my database (Sql 8.0) and to create a tar to backup my entire site's files and this goes on everyday I want to create another bash / cron job to connect to my backup server and upload those backup files stored on my VPS. The problem is, I can't get it to upload only the newest files, not the entire files as there will be 7 backups every week. I want it to only upload today's files, not all available files. Should I use rsync ? Here is my bash so far: #!/bin/sh USERNAME="ftp user" PASSWORD="ftp password" SERVER="IP or domain" # local directory to pickup *.tar.gz file FILE="/path/" # remote server directory to upload backup BACKUPDIR="/pro/backup/sql" # login to remote server ftp -n -i $SERVER <<EOF user $USERNAME $PASSWORD cd $BACKUPDIR mput $FILE/*.tar.gz quit EOF
Uploading current day's backup instead of all files via Cron jobs
Make sure "Only show files in backup" is set to "No".
I am using the "Filesystem Backup" module from Webmin, but restoring doesn't work on my server. My configuration: Debian 10 (Buster), Webmin version 1.970. Here is what I did: created a backup; manually modified a file to create a difference between the backed-up version and the post-backup version; restored the backup. The action log tells me that the restoration is complete, but when I go to look at my files, nothing has changed. I don't know where to look for errors or where the problem could come from. Any idea? Thank you in advance.
Webmin Filesystem backup | Restoration completes without errors, but doesn't work
Option A (online): For databases smaller than about 500 GB you can use mongodump/mongorestore, or Ops Manager if you have it. For bigger databases it is more effective to take storage or LVM snapshots of the backend file system: you can lock some members, take the snapshot (it only takes a few seconds), and unlock them afterwards; once you have the snapshots you can copy them somewhere safe.
Option B (offline): Shut down the instances and copy the data directory and configs to a safe place.
P.S. For a replica set it is enough to keep a snapshot of a single member. For a sharded cluster, keep one copy from one data member per shard plus one copy of the CSRS (config server replica set).
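A minimal sketch of option A with mongodump/mongorestore; the host, credentials, and paths are placeholders:
# dump all databases into a compressed archive
mongodump --host localhost:27017 -u backupUser -p 'secret' --authenticationDatabase admin --gzip --archive=/backups/mongo-$(date +%F).gz
# restore that archive into a target instance
mongorestore --host localhost:27017 -u backupUser -p 'secret' --authenticationDatabase admin --gzip --archive=/backups/mongo-2021-01-25.gz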
I'm new to using Mongo DB and was wondering if anyone could explain to me how to make reliable and accurate backups of my mongo data.
Mongo DB Backup and Restore
In general, Git requires all references in a repository to point to valid objects and, unless you are using a shallow or partial clone, every object reachable from a reference must be present. A repository that doesn't meet these criteria is corrupt. Therefore, it's not possible to just delete the .git/objects directory and move on, since all of your references become invalid. Also, when Git performs a negotiation with the remote side as part of the transfer protocol, it will use the objects specified by those references as statements of objects it has, and will negotiate with the remote side (by walking the history) until it finds a common set of objects. Since Git can't walk the history when there are no objects, the entire protocol won't work. You can either re-clone the repository with a shallow or partial clone if you want to reduce the amount of data, or if you don't want to do that, you can minimize disk usage by packing the repository with git gc. Those are the only options that won't corrupt your repository.
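A short sketch of those two options; the repository URL is a placeholder:
# option 1: re-clone with reduced data
git clone --depth 1 https://example.com/project.git            # shallow clone (truncated history)
git clone --filter=blob:none https://example.com/project.git   # partial clone (needs server-side support)
# option 2: keep the existing clone but repack it to minimize disk usage
git gc --aggressive --prune=now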
While running my scheduled backups I have to save many git projects (containing the .git folder), and I'm looking for a way to filter the backup operation so that it saves only each project's references and the remote URL of the repository, without saving the entire .git folder, which sometimes contains too many files that I don't need/want to back up (e.g. .git/logs and .git/objects). That way, when restoring the backups, I should be able to just push/pull without problems, and if I need to align a repository with its remote it would be enough to pull. I tried filtering out .git/logs and .git/objects, but then I get an error when executing, for example, git status. Any suggestion? Thanks
Backup git projects by keeping only a reference with the remote repository without saving the entire .git folder
You have to use the withAccess parameter of the set method. The possible values are listed in the official KeychainSwift repository here: https://github.com/evgenyneu/keychain-swift/blob/master/Sources/KeychainSwiftAccessOptions.swift You need an access value with a ThisDeviceOnly ending so that the item is not synced or transferred to other devices.
I use KeychainSwift to save my data on keychain, my problem is, when the I backup and restore my app, from Device A to Device B, those data from Keychain was included on the transfer. The question is, how can I prevent it from happening and make my keychain stay only on Device A? This is my code on Saving data into the keychain import KeychainSwift class ExampleViewController: UIViewController { let keychain = KeychainSwift(keyPrefix: "some_key") override func viewDidLoad() { keychain.set("some_data", forKey: "thisDeviceOnly") } }
How to prevent keychain data to be included in backup and restore process of the devices when using KeychainSwift?
It should work unless, as shown in docker-gitlab issue 562, the move was done with a different ownership:
It should be okay to move the files from /data1/data to /data2/data, but you should take a little care while copying the files to the new location, i.e. either of these should be fine:
cp -a /data1/data /data2/data
rsync --progress -av /data1/data /data2/data
Simply doing cp -r /data1/data /data2/data will not preserve the ownership of the files, which will cause issues.
I am using GitLab via docker on an intranet disconnected from the internet. I run GitLab docker using docker-compose following yml file. web: image: 'gitlab/gitlab-ee:latest' restart: always hostname: 'myowngit.com' ports: - 8880:80 - 8443:443 volumes: - /srv/gitlab/config:/etc/gitlab - /srv/gitlab/logs:/var/log/gitlab - /srv/gitlab/data:/var/opt/gitlab Then free space of 'volumes' is not enough so i move this path to '/mnt/mydata'. And I modify docker-compose.yml file. ... ... ... volumes: - /mnt/mydata/gitlab/config:/etc/gitlab - /mnt/mydata/gitlab/logs:/var/log/gitlab - /mnt/mydata/gitlab/data:/var/opt/gitlab To start GitLab service run sudo docker-compose up -d. After running the GitLab service I try to explore the project repository but the repository is not found(HTTP response 404 or 503). What is the reason? How to move GitLab docker volume directory?
Gitlab docker backup and restore
To move data between platforms, use MLCP and ask it to make an archive. Please see the relevant docs at https://docs.marklogic.com/guide/mlcp/export#id_93332 To move configuration, you can use Configuration Manager (https://docs.marklogic.com/guide/admin/config_manager) but it’s deprecated because the best practice these days is to script the construction of all things, perhaps with ml-gradle (https://developer.marklogic.com/code/ml-gradle/), and to check those construction scripts into your source control and control configuration that way across your multiple environments.
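A hedged sketch of that MLCP archive export and import; host names, credentials, and paths are placeholders, and the exact options should be checked against the MLCP documentation linked above:
# on the Linux source host: export the database content as an MLCP archive
mlcp.sh export -host source-host -port 8000 -username admin -password '...' -database The_existing_db -output_file_path /tmp/db-archive -output_type archive
# on the Windows destination host: import that archive into the new database
mlcp.bat import -host target-host -port 8000 -username admin -password "..." -database The_new_database -input_file_path C:\temp\db-archive -input_file_type archive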
I need to confirm the process to copy a database from a Linux ML 9.x server to a Windows ML 9.x server and wanted to make sure I understood properly. Apparently I cant use a backup of a ML database taken on a Linux to restore onto a Windows server. Here is what I think the high level process is and would welcome correction/assistance please : On Linux source server : (1) Export the database configuration on source server ( to file ) xquery version "1.0-ml"; import module namespace admin = "http://marklogic.com/xdmp/admin" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() let $config := admin:database-copy($config,xdmp:database("<The_existing_db>"),"<The_new_database>") (2) Export all forest structures on source servers ( to file - where are these stored on disk? ) xquery version "1.0-ml"; import module namespace admin = "http://marklogic.com/xdmp/admin" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() let $config := admin:forest-copy($config,xdmp:forest("<original_forest>"),"<forest_copy>",()) (3) Export source data using mlcp ( to file/s - or db? ) On Windows destination server : (4) Create new database from exported configuration files (5) Create forests from exported forest configuration files & attach to database (6) Import data using mlcp from exported files Have I missed anything / got it wrong? Thanks in advance.
How to copy MarkLogic database between platforms
You should look into mysqldump with the --tab option. It runs those INTO OUTFILE statements for you, dumping each table into a separate file. You don't want all the tables in one file, because it would make it very awkward to import later. Always be thinking about how you will restore a backup. I tell people, "you don't need a backup strategy, you need a restore strategy." Backing up is just a necessary step to restoring.
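A hedged sketch of a nightly mysqldump --tab run; the database name, table list, and script path are placeholders, and the output directory must be writable by the MySQL server process (the same secure_file_priv restriction as your INTO OUTFILE path):
# writes table1.sql (schema) and table1.txt (delimited data) per table
mysqldump --tab=/var/lib/mysql-files --fields-terminated-by=',' --fields-enclosed-by='"' mydatabase table1 table2 table3
# example crontab entry to run a wrapper script daily at 01:30
# 30 1 * * * /usr/local/bin/dump_tables.sh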
I need to know how can I export 10 data tables from one database into a csv format with cron job daily? I know this script: SELECT * FROM TABLE NAME INTO OUTFILE '/var/lib/mysql-files/BACKUP.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'; But how can I in the same line add the another 9 tables? Best Regards!
How to output MySQL data tables in CSV format?
You must use git clone first, not just git push. To edit wikis locally, you must first clone the wiki. Say the GitHub username is test and the repository name is test-repo; then you would run the following command in Git Bash:
git clone https://github.com/test/test-repo.wiki.git
About the issues, there is tooling that may help you do various operations with issues locally. Hope this helps. 🙂
From the comments: the asker has Markdown files, and pushing directly to github.com/test/test-repo.wiki.git does not work; cloning, changing, and then pushing works, but python-github-backup does not create a .git directory, and recreating one with git init and pushing afterwards does not work either. The answerer suggests updating the wiki via the GitHub UI in that case.
I want to move my GitHub repo from GitHub to GitLab. I want to move everything(issues, wiki and ...). I use python-github-backup and backup everything but I do not know how to push issues and wiki. I search and find out I can use git push URL/repo.wiki.git but it does not work in my scenario. How can I push issues and wiki?
Pushing issues and wiki to Github repo
The first link you shared is about the RDS snapshots (automated snapshots) that are taken by the RDS service. These snapshots are only accessible within the single AWS region in which the snapshot was taken; however, you can copy an automated snapshot, which then makes it a manual snapshot. From there you would be able to copy the snapshot to another AWS region. The second link is actually referring to the AWS Backup service, which is a service for centrally managing your backups. When a snapshot is taken, the Backup service can handle the process of copying the snapshot to another region for you. It's worth noting that AWS Backup is a more recent addition to AWS, whereas the automated snapshots for RDS have existed since the service launched.
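A hedged sketch of that cross-region copy with the AWS CLI; the account ID, snapshot identifiers, and regions are placeholders. For unencrypted snapshots --source-region can be omitted; encrypted snapshots additionally need a --kms-key-id in the destination region:
# run against the destination region, referencing the source (automated) snapshot by ARN
aws rds copy-db-snapshot --region eu-west-1 --source-region us-east-1 --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:rds:mydb-2021-03-01 --target-db-snapshot-identifier mydb-2021-03-01-copy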
Can RDS automated backup be done in another region? I am seeing conflicting data from AWS: https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/ says that automated backups are limited to single region. https://aws.amazon.com/about-aws/whats-new/2020/01/aws-backup-supports-cross-region-backup/ talks about AWS Backup and says that RDS has cross-region support. I am now confused. Can someone please help? My goal is disaster recovery, as highlighted in the first link.
AWS RDS automated backup
You could use a self-hosted caching NPM proxy like https://verdaccio.org/ . Install from it (forcefully) to make it cache your requested packages:
npm install --force --registry https://your.npm.proxy
Restore is a simple unforced
npm install --registry https://your.npm.proxy
We are becoming more and more dependent on public open-source repositories – And I was wondering if one of the packages or dependencies is down or no longer online – we'd be screwed if we do not have a plan B. Is there any project that allows you to scan all you Github project "package.json/yarn.lock" – and backup in your own VM all dependencies that you used for your project. That would be a failover registry in case something goes south. Any thoughts?
Backup npm packages registry
Put a delete lock on the resource group containing the backups; the name of the group should be something like AzureBackupRG_eastus2_1. Then run your backup a couple of times and the backup should eventually fail because it can't remove the oldest backup. Another way to do it is to block all outbound traffic in your NSG.
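A small sketch of creating that lock with the Azure CLI; the lock name is arbitrary and the resource group name is the one from your own subscription:
az lock create --name block-backup-deletes --lock-type CanNotDelete --resource-group AzureBackupRG_eastus2_1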
How can I simulate a backup failure on Azure VM? The Backup job executes fine every day. In order to test an alert execution I need to fail the VM backup. I tried to kill the Volume Shadow Copy process on the VM during a manually triggered backup, but it didn't cause the back to fail. Any ideas are greatly appreciated.
How to simulate Azure VM backup failure
You would need to re-install the Backup extension. Take a backup of the whole registry, then use the following steps:
1. Log in to the affected machine.
2. Open Registry Editor.
3. Remove the VMSnapshot registry keys at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure\HandlerState.
4. Remove or rename the VMSnapshot plugin folders at C:\Packages\Plugins.
5. Open a command prompt as admin and run the following commands to force the extension installation:
REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgent" /v IsProviderInstalled /t REG_SZ /d False /f
REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v IsCommonProviderInstalled /t REG_SZ /d False /f
6. Restart the service "WindowsAzureGuestAgent".
7. Trigger a manual backup. As part of the backup, the extension will be re-installed automatically.
I am receiving a backup failed error since 6/20/2020 within my Azure backup policy. I also noticed within the Properties section under Settings of the Virtual Machine instance the "Agent status" is in a "Not Ready" state. When I click the Backup section under Operations the Backup Status shows 2 entries. Backup Pre-Check Warning Last backup status Failed I can click the Warning option and the link takes me to a page that reads: Issue Description VM agent is unable to communicate with the Azure Backup Service. Suggested Action(s) Ensure that VM agent is latest and running. Allow access to IP 168.63.129.16 Per the Suggested Actions, I created the Outbound Rule to the IP Address 168.63.129.16 within the network interface of the Virtual Machine instance that is having the backup failed issue. That did not solve the problem. I also performed the below troubleshooting steps as well with no solution: I also Verified the Windows Azure Guest Agent service is running within services on the affected VM OS (ACMVI002). Stopped the VM instance from the Azure Portal. I then turned the VM instance back on. Issue persisted. Does anyone have a solution to this Backup Failed issue?
Azure Backup Failed "VM agent is unable to communicate with the Azure Backup Service"
If you are using Cloud Memorystore for Redis you can simply refer to the export documentation. Notice that you can simply use the following gcloud command:
gcloud redis instances export gs://[BUCKET_NAME]/[FILE_NAME].rdb [INSTANCE_ID] --region=[REGION] --project=[PROJECT_ID]
or use the Export operation from the Cloud Console.
If you manage your own instance (e.g. you have the Redis instance hosted on a Compute Engine instance) you could simply use the SAVE or BGSAVE (preferred) commands to take a snapshot of the instance and then upload the .rdb file to Google Cloud Storage using any of the available methods, of which I think the most convenient one would be gsutil (notice that it will require an installation procedure), in a similar fashion to:
gsutil cp path/to/your-file.rdb gs://[DESTINATION_BUCKET_NAME]/
I want to backup the REDIS data on google storage bucket as flat file, is there any existing utility to do that? Although, I do not fully agree to idea of backing up of cache data on cloud. I was wondering if there is any existing utility rather than reinventing the wheel.
backup distributed cache data to cloud storage
I found a solution right afterwards. The filters have to be:
+ /etc
+ /home
+ /opt
+ /root
+ /srv
+ /var
+ /var/backup
+ /var/log
+ /var/mail
+ /var/www
- /var/*
- /*
I want to do a Linux system backup with rsync, and my backup strategy is to exclude everything except what I explicitly want. I am not succeeding in finding the correct filter rules. I have the file sync_filter:
+ /etc
+ /home
+ /opt
+ /root
+ /srv
+ /var/backup
+ /var/log
+ /var/mail
+ /var/www
- /*
I think my intention is clear: I want to include /etc and so on, and I want to include /var/log and /var/mail, but I do not want to include /var/cache. I am not succeeding on the /var part. Running rsync -av --filter "merge sync_filter" --delete root@remote:/ . will skip the /var part completely. From the documentation, I understand that /var/log, for example, is skipped because its parent /var is excluded by the final exclude rule. I also know that I could include /var and exclude, e.g., /var/cache, but I'd like to succeed with my backup strategy of excluding everything except what I explicitly want.
rsync filter excluding everything but given (sub) directories
Python backup/recovery steps on ML Server:
Backup: copy the folder C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\ (with a working Python version) somewhere safe.
Recovery:
1) Stop SQL Server, including Launchpad.
2) Drop/replace the current Python folder C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\
3) Copy the PYTHON_SERVICES folder with the working Python version into C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\
4) Start SQL Server.
5) Start Launchpad.
After installation of numpy-1.18.1-cp35-cp35m-win_amd64.whl package in Python 3.5 ( SQL Server Machine Learning Services for windows x64, SQL Management Studio v17.9.1, MSSQL Server 2017) I recieved the error: Unable to communicate with the runtime for 'Python' script. Please check the requirements of 'Python' runtime. If will not be able to fix the problem, I would prefer not to reinstall ML Server from scratch, but restore Python and its packages from backup. Is there a method of backup and restore Python and Packages? May be, it is sufficient to backup and restore some Python folders like: C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\ Thank you very much.
Machine Learning Server (SQL): how to backup and restore Python and Packages
When you have a Docker container that is already configured to use a MySQL database inside the container, you can make a database backup with the mysqldump inside the container, like this: mysqldump database > database.sql (NOTE! You run this inside your container) You can then use the docker container cp command to copy files between the container and the local filesystem. For instance: docker container cp <containerId>:/file/path/within/container /host/path/target (NOTE! You run this in your local filesystem) So using this example, the commandline would become: docker container cp <containerId>:/full/path/to/database.sql . That copies the file database.sql to your current directory in your local filesystem.
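As a variation (a sketch; the container name and credentials are placeholders), you can also run mysqldump through docker exec from the host, which writes the dump straight to the host filesystem and skips the copy step:
# dump directly to the host; the container keeps running
docker exec my-mysql-container mysqldump -u root -p'secret' database > database.sql
# restore later by piping the dump back into the container
docker exec -i my-mysql-container mysql -u root -p'secret' database < database.sql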
SOLVED ! Good morning :) Did anyone make a db backup from docker to host machine? I have a MySQL database but I would like to clone it into my real machine because every time I run container it creates a new image, but I want to use this db every time (so I want db to persist in real machine). Thanks a lot !
Backup DB from docker to real machine
Your database system is I/O bound, as you can see from the %iowait value of 63.62. Increasing maintenance_work_mem might improve the situation a little, but essentially you need faster storage.
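A small sketch of confirming that from the shell while the restore runs (iostat is part of the sysstat package; the device to watch is whichever holds the data directory):
# extended device statistics every 5 seconds; sustained %util near 100 on the data disk
# with low CPU use confirms the restore is I/O bound rather than CPU bound
iostat -x 5
# overall CPU vs. iowait picture
vmstat 5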
I am trying to improve the time taken to restore a PostgreSQL database backup using pg_restore. The 29 GB gzip-compressed backup file is created from a 380 GB PostgreSQL database using pg_dump -Z0 -Fc piped into pigz. During pg_restore, the database size is increasing at a rate of 50 MB/minute estimated using the SELECT pg_size_pretty(pg_database_size()) query. At this rate, it will take approximately 130 hours to complete the restore which is a very long time. On further investigation, it appears that the CPU usage is low despite setting pg_restore to use 4 workers. The disk write speed and IOPS are also very low: Benchmarking the system's IO using fio has shown that it can do 300 MB/s writes and 2000 IOPS, so we are utilizing only about 20% of the potential IO capabilities. Is there any way to speed up the database restore? System Ubuntu 18.04.3 1 vCPU, 2 GB RAM, 4GB Swap 500 GB ZFS (2-way mirror array) PostgreSQL 11.6 TimescaleDB 1.60 Steps taken to perform restore: Decompress the .gz file to /var/lib/postgresql/backups/backup_2020-02-29 (~ 40mins) Modify postgresql.conf settings work_mem = 32MB shared_buffers = 1GB maintenance_work_mem = 1GB full_page_writes = off autovacuum = off wal_buffers = -1 pg_dump -Z0 -Fc0 Run the following commands inside pg_dump -Z0 -Fc1: pg_dump -Z0 -Fc2
Improve PostgreSQL pg_restore Performance from 130 hours
I have modified your code to do what you need:
import datetime
import os
import shutil

backup_file = input(r"Please enter the path for the file to backup: ")  # D:\Python\BackupDB\test.db
dest_dir = input(r"Please enter the destination path: ")                # D:\Python\BackupDB\
folder_name = input(r"Please name your backup folder: ")                # BD_Backup

old_file_name = os.path.basename(backup_file)
now = str(datetime.datetime.now())[:19].replace(":", "_")
new_file_name = os.path.splitext(old_file_name)[0] + "_" + now + ".db"

final_destination = os.path.join(dest_dir, folder_name)
if not os.path.exists(final_destination):
    os.mkdir(final_destination)

# copy the file into the backup folder, then rename the copy with the timestamp
shutil.copy(backup_file, final_destination)
os.rename(os.path.join(final_destination, old_file_name),
          os.path.join(final_destination, new_file_name))
The idea is the same as yours: after copying the file, just rename the copy.
so as you can see im trying to create a small backup script for my self, to select needed files and back them up. import shutil import datetime import os import time def backup(): # set the update interval while True: backup_interval = input("Please enter the backup interval in seconds: ") # 300 try: valid_time = int(backup_interval) // 60 print("Backup time set to:", valid_time, "minutes!") break except ValueError: print("This time is not valid, please enter a correct time in seconds: ") print(">>> 60 seconds = 1 minute, 3600 seconds = 60 minutes.") backup_file = input(r"Please enter the path for the file to backup: ") # D:\Python\BackupDB\test.db" dest_dir = input(r"Please enter the destination path: ") # D:\Python\BackupDB\ folder_name = input(r"Please name your backup folder: ") # BD_Backup now = str(datetime.datetime.now())[:19] now = now.replace(":", "_") # backup_file = backup_file.replace(backup_file, backup_file + str(now) + ".db") # thats why I got the FileNotFoundError final_destination = os.path.join(dest_dir, folder_name) if not os.path.exists(final_destination): os.makedirs(final_destination) print("hello world") shutil.copy(backup_file, final_destination) the first question is, how do i replace the name after i copied the file into the destination folder to get something like that test.db -> test_2020-02-23 08_36_22.db like here : source_dir = r"D:\Python\BackupDB\test.db" destination_dir = r"D:\Python\BackupDB\BD_Backup\test_" + str(now) + ".db" shutil.copy(source_dir, destination_dir) output : test_2020-02-23 08_36_22.db what im doing wrong here? and how to copy the file 5 times and after a while (backup_interval) delete the first one and move the last 4 up and create a new one so I have in total 5 copies of that file?
shutil.copy fails after os.makedirs
(TimescaleDB person here.) There are two main approaches here:
Use a backup system like WAL-E or pgBackRest to continuously replicate data to some other target (like S3).
Integrate your use of TimescaleDB's drop_chunks with your data extraction process.
The answer somewhat depends on how complex your data/database is. If you are primarily looking to archive data from a single hypertable, I would recommend the latter: use show_chunks to determine which chunks fall in a certain range, run a SELECT over that range and write the data wherever you want, and then execute drop_chunks over the same range.
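A hedged sketch of that extract-then-drop flow from the shell; the hypertable name, time column, interval, and output path are placeholders, and the show_chunks/drop_chunks signatures differ between TimescaleDB 1.x and 2.x, so check the documentation for your version:
# list chunks older than three months
psql -d mydb -c "SELECT show_chunks('conditions', older_than => INTERVAL '3 months');"
# export that range to a local CSV file
psql -d mydb -c "\copy (SELECT * FROM conditions WHERE time < now() - INTERVAL '3 months') TO '/backups/conditions_old.csv' CSV HEADER"
# then drop the same range from the remote hypertable
psql -d mydb -c "SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');"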
I've got a remote ever growing TimescaleDb database. I would like to keep only the most recent entries in the that Db, backing up the rest of the data to local drive, to achieve constant Db size on the server. I thought of making full pg_dump backups before retaining and rebuilding the base locally from these backups. Also, I could use WAL-E to create a continuous copy, somehow ignoring the deletions on the remote database. What would be the most efficient way to achieve that?
How to drop the oldest entries from a remote TimescaleDb, maintaining the full local backup of the database?
Hello and welcome to Stack Overflow! Hope your web app satisfies the prerequisites to be able to take a backup: the Backup and Restore feature requires the App Service plan to be in the Standard or Premium tier. Refer to the Requirements and restrictions documentation for the complete details.
I got the same error initially when executing the cmdlet. However, passing the -Debug switch with the cmdlet helped me understand the error better:
{
  "ErrorEntity": {
    "ExtendedCode": "04205",
    "MessageTemplate": "The provided URI is not a SAS URL for a container (it needs to be https and it has to have 2 segments).",
    "Parameters": [],
    "Code": "BadRequest",
    "Message": "The provided URI is not a SAS URL for a container (it needs to be https and it has to have 2 segments)."
  }
}
The -StorageAccountUrl parameter expects a SAS URL to be passed (see the cmdlet documentation). The SAS URL for your storage account has the format:
$sasurl = "https://$storagename.blob.core.windows.net/$container$sastoken"
To generate the SAS token and run the backup:
# Retrieve one of the Storage Account keys
$key = (Get-AzStorageAccountKey -ResourceGroupName "<rg-name>" -AccountName "<storage-account-name>").Value[0]
# Populate the Storage Account context
$ctx = New-AzureStorageContext -StorageAccountName "<storage-account-name>" -StorageAccountKey $key
# Generate the SAS token
$sastoken = New-AzureStorageContainerSASToken -Name "<container-name>" -Permission rwdl -Context $ctx
# Generate the SAS URL
$sasurl = "https://<storage-account-name>.blob.core.windows.net/<container-name>$sastoken"
# Create a backup
New-AzureRmWebAppBackup -ResourceGroupName "<rg-name>" -Name "<app-name>" -StorageAccountUrl $sasurl -BackupName "<backup-name>"
Successful execution of the above commands should let you create the backup. Hope this helps!
When I'm trying to execute the Azure App Service Backup over Azure Powershell with New-AzWebAppBackup cmdlet as described here: https://learn.microsoft.com/en-us/powershell/module/az.websites/New-AzWebAppBackup?view=azps-3.3.0 I'm getting the following error message: PS Azure:\> New-AzWebAppBackup -ResourceGroupName "MyResourceGroup" -Name "MyWebApp" -StorageAccountUrl "https://mystorageaccount.file.core.windows.net/" New-AzWebAppBackup : Operation returned an invalid status code 'BadRequest' At line:1 char:1 + New-AzWebAppBackup -ResourceGroupName "MyResourceGroup" -Name " ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : CloseError: (:) [New-AzWebAppBackup], DefaultErrorResponseException + FullyQualifiedErrorId : Microsoft.Azure.Commands.WebApps.Cmdlets.WebApps.NewAzureWebAppBackup Does anyone know what I'm missing? I'm thankful for any advice. Many Thanks.
New-AzWebAppBackup: Operation returned an invalid status code 'Bad Request'
The way rsync treats its source and destination paths is easy to get wrong. When you use the command:
rsync -a -v -n --progress /media/drive1 /media/drive2
...it tries to sync the drive1 folder into drive2; that is, it creates and populates /media/drive2/drive1. When you add "/folder" to the source path, it works as expected because then it's syncing with /media/drive2/folder, which is what you want. Fortunately, the solution is easy: add "/" to the end of the source path, which tells it to sync the contents of drive1 into drive2, rather than the folder itself:
rsync -a -v -n --progress /media/drive1/ /media/drive2
BTW, keep the -n (--dry-run) flag, as you already do, to make sure it's doing what you want before running it "for real". You'll probably also have to delete /media/drive2/drive1.
After making a full forensic copy of a hard drive using dd, I would like to keep up with changes between the original and the backup hard disk, so I started using rsync. Whenever I run
rsync -a -v -n --progress /media/drive1 /media/drive2
the command starts listing all files contained in drive1, although only a couple of them have changed since I ran dd. Trying it on a single folder,
rsync -a -v -n --progress /media/drive1/folder /media/drive2
works fine and just displays the new files in that folder - those which are not contained in /media/drive2/folder. However, executing the command at the level of both volumes,
rsync -a -v -n --progress /media/drive1 /media/drive2
does not account for the differences, contrary to the documentation that is available everywhere, but lists all files that are already on both drives. What is my mistake?
RSYNC and folder hierarchy
If your process creates a commit every hour, you can reset the past 23 commits every day and create a single commit:
git reset --soft HEAD~23 && \
git commit -m "Squashed previous 23 commits into one" && \
git push --force origin
Note that the push needs --force because the squash rewrites history that has already been pushed. Two caveats from the comments: with --soft the changes stay in the index, so no extra git add is needed; and this assumes a commit is made every hour, even for hours in which nothing changed.
I have written a simple script to git commit the changes every hour and push the changes But I want to keep only the last commit of each day and remove the previous ones due to the size of commits to save the space. In other words, I need to keep the last commit of 22nd December and remove the previous ones but keep the last commit of the previous day and it should not be deleted. This is the same for the next days.
How to keep the last commit of every day
If I understood correctly, go to the database and:
1. Right-click the table and choose Script Table As -> CREATE To... This gives you a script of the table with all its indexes and the other objects that belong to it.
2. Create these tables in your new database.
3. Copy your data from the backup tables to the new ones, for example with a TABLOCK hint. Before copying, drop the constraints and indexes on the new table, then copy your data. Alternatively, without dropping any objects, rebuild your indexes and update statistics afterwards with ALTER INDEX: https://learn.microsoft.com/ru-ru/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-2017
Background:I needed to copy 2 tables from a backup to a production SQL Server database. Being new to SQL, I thought that I could just drop and insert into and it would work. So naive. Is there any simple way to copy everything about the good tables (I restored them into a separate backup) into the tables I created in the production DB? I know how to view constraints using "right click on table - tasks - create script - create script using CREATE", but I don't know what to do with this information.
How to copy/recreate indexes, constraints, triggers etc in SQL server?
Does your password contain any characters that are special to the shell? Like ; & ! or space, quotes, etc.? You might have to quote the password. But regardless, I recommend you do NOT put your passwords on the command-line. Anyone who can run ps on your server can therefore see your password in plain text. Instead, put the user and password in an options file. That is, save a small file with the following content: [mysqldump] user = backupuser password = ... Of course write your own password where I put "...". Change the permissions on the file so no one can read it except the user who runs your backups. Then reference the options file when you run mysqldump: mysqldump --defaults-file=myopts.cnf --all-databases ... That way the password is not visible in the server's process list. Also there is no worry about special characters in your password. P.S.: I'm not sure why you use sudo to run your backup. That should not be necessary.
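A small sketch of wiring that together; the file path and schedule are arbitrary choices:
# create the options file and make it readable only by the backup user
cat > /home/backupuser/.my-backup.cnf <<'EOF'
[mysqldump]
user = backupuser
password = your-password-here
EOF
chmod 600 /home/backupuser/.my-backup.cnf
# crontab entry: nightly dump at 02:00, no password on the command line
# 0 2 * * * mysqldump --defaults-file=/home/backupuser/.my-backup.cnf --all-databases > /backups/all_db_backup.sql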
I have created a mysql user using following command: CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'password'; I have given him permissions using following command: GRANT EVENT, LOCK TABLES, SELECT, SHOW DATABASES, RELOAD, REPLICATION CLIENT ON *.* TO 'backupuser'@'localhost' IDENTIFIED BY 'password'; But when I try to do database dump using following command it does not work: sudo mysqldump -ubackupuser -ppassword --all-databases > all_db_backup.sql It says: mysqldump: Got error: 1045: Access denied for user 'backupuser'@'localhost' (using password: YES) when trying to connect But if I try to connect without using password then it asks for password and then it works. I am creating a backup script so I have to pass the password in command. Can you please help how to make my backupuser to connect using mysql dump?
mysqldump: MySql backup user is not able to connect using password
You can use db-dumper package to backup your database. Installing: composer require spatie/db-dumper Usage: Spatie\DbDumper\Databases\MySql::create() ->setDbName($databaseName) ->setUserName($userName) ->setPassword($password) ->dumpToFile('dump.sql');
I'm going to backup my database .sql file from laravel. when I use artisan command from cmd, the backup file will generated with no issue. but when i do that from controller I got this error. How to fix this? mysqldump: Got error: 2004: "Can't create TCP/IP socket (10106 "Unknown error")"
Backup .sql from laravel
Here are the steps I took to correct this via http://portal.azure.com (I realize step 6 might be overkill as the Restore permission might be unnecessary here--but hey, this worked): Search for "Key vaults". Click on my key vault. Click "Access policies". Click "Backup Management Service". Click on the Key permissions dropdown and uncheck all checkboxes. Click on the Secret permissions dropdown and choose the Get, List, Backup, and Restore checkboxes. Click OK. Click Save back on the "Access policies" screen. The last step above is important as missing it will cause your changes NOT to be saved. I wrote these steps up and followed them as influenced by a statement I found at https://learn.microsoft.com/en-us/azure/backup/backup-azure-vms-encryption that says, "If your VM is encrypted using BEK only, remove the selection for Key permissions since you only need permissions for secrets." It seems I have BEK--at least that's what my Secret Types are. And indeed, the above worked. The backups began to work again as of July 11th!
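For reference, the same secret permissions can be granted non-interactively with the Azure CLI; this is a sketch, and the vault name and the object ID of the Backup Management Service principal are placeholders you would look up in your own tenant:
az keyvault set-policy --name my-key-vault --object-id <backup-management-service-object-id> --secret-permissions get list backup restore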
I am met with the following error details when investigating why an Azure encrypted VM backup failed, but the link provided with the error (https://learn.microsoft.com/en-in/azure/backup/backup-azure-vms-encryption) doesn't resolve my question: exactly which permissions should I grant? All it says is that "The required permissions are prefilled for Key permissions and Secret permissions." Well, that's not a lot of help! I had those permissions already set as default I thought, because I do have lots of backups/snapshots; obviously backups have been working in the past. If I am missing some permission now, is it a Key permission, or a Secret permission? It's not clear! I do see I have the following set up right now: Key permissions: Key Management Operations Get (checked) List (checked) Update Create Import Delete Recover Backup (checked) Restore Cryptographic Operations: Decrypt Encrypt Unwrap Key Wrap Key Verify Sign Privileged Key Operations Purge Secret permissions: Secret Management Operations Get (checked) List (checked) Set Delete Recover Backup Restore Privileged Secret Operations Purge Certificate permissions: Certificate Management Operations Get List Update Create Import Delete Recover Backup Restore Manage Contacts Manage Certificate Authorities Get Certificate Authorities List Certificate Authorities Set Certificate Authorities Delete Certificate Authorities Privileged Certificate Operations Purge Below is the error I see for my backup: Error Code UserErrorKeyVaultPermissionsNotConfigured Error Message Azure Backup Service does not have sufficient permissions to Key Vault for Backup of Encrypted Virtual Machines. Recommended Action Please grant the required permissions to the Azure Backup Service. Refer https://azure.microsoft.com/en-in/documentation/articles/backup-azure-vms-encryption/ Related Links https://azure.microsoft.com/en-in/documentation/articles/backup-azure-vms-encryption
What are the required permissions for the Azure Backup Service?
Yes. SQL Server is backward compatible with any version that was supported at the time the version was released. For SQL Server 2016 that was SQL Server 2008-2014. A full list of the compatibility modes available can be found on the documentation here. Note that restoring a database on a newer version of SQL Server is a one way process. The database, from an older version, will be upgraded to work on the newer version and set to the appropriate compatibility level. You cannot restore a compatibility 100 database from SQL Server 2016 on a SQL Server 2008 instance (or anything else prior to 2016).
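A hedged sketch of the restore with sqlcmd; the server, paths, and logical file names are placeholders (check them with RESTORE FILELISTONLY), and the MOVE clauses are only needed if the original file paths don't exist on the 2016 server:
# inspect the logical file names stored in the backup
sqlcmd -S localhost -Q "RESTORE FILELISTONLY FROM DISK = N'C:\backups\MyDb.bak'"
# restore; the database is upgraded in place and keeps compatibility level 100
sqlcmd -S localhost -Q "RESTORE DATABASE MyDb FROM DISK = N'C:\backups\MyDb.bak' WITH MOVE 'MyDb' TO 'D:\data\MyDb.mdf', MOVE 'MyDb_log' TO 'D:\data\MyDb_log.ldf'"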
Can you restore SQL Server 2008 backup to SQL Server 2016 ? Thank you
Restore SQL Server 2008 backup to SQL Server 2016
This is just a small mistake in the configuration: for a weekly schedule you need to specify the weekdays inside the backup block of the policy as well, like this:
backup {
  frequency = "Weekly"
  time      = "18:30"
  weekdays  = ["Friday"]
}
Then it will work fine; I verified this with a test on my side.
I am trying to create a weekly Azure VM protection policy in Terraform to run on Fridays at 6:30 pm with a retention of 1. TF throws format error related to 'schedule time, schedule days, retention time and retention days' error. I am not exactly sure which parameter has an incorrect value or format. resource "azurerm_recovery_services_vault" "backup_vault" { name = "${var.RG4VM}-recovery-vault" location = "${var.VMLocation}" resource_group_name = "${var.RG4VM}" sku = "Standard" depends_on = ["azurerm_resource_group.ResourceGroup"] } resource "azurerm_recovery_services_protection_policy_vm" "backup_policy" { name = "${var.RG4VM}-bkp-policy" resource_group_name = "${var.RG4VM}" recovery_vault_name = "${azurerm_recovery_services_vault.backup_vault.name}" depends_on = ["azurerm_recovery_services_vault.backup_vault"] backup { frequency = "Weekly" time = "18:30" } retention_weekly { count = 1 weekdays = ["Friday"] } } Expected: It should create the policy as per the config defined. Actual: azurerm_recovery_services_protection_policy_vm.backup_policy: 1 error(s) occurred: azurerm_recovery_services_protection_policy_vm.backup_policy: Error creating/updating Recovery Service Protection Policy "Terraform-Linux-Test-RG-bkp-policy" (Resource Group "Terraform-Linux-Test-RG"): backup.ProtectionPoliciesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BMSUserErrorInvalidPolicyInput" Message="Input for create or update policy is not in proper format\r\nPlease check format of parameters like schedule time, schedule days, retention time and retention days " I'd appreciate any help in resolving this issue. Thanks Asghar
Terraform Azurerm Recovery Services Vault Backup Policy Format Error
If churn happens at a consistent rate daily, then having only daily recovery points is also fine. If you see some unexpected churn, take an ad-hoc backup and give it a higher retention. If you observe that churn mostly happens around month end (say, for finance applications), then keeping a monthly recovery point might make sense.
I'm trying to wrap my head around Azure Backup retention points & want to know if the retention policy I'm choosing is optimal. With reference to the Azure pricing calculator, if I take 30 Daily RPs (Recovery Points) & 5 Yearly RPs, won't my VM data be adequately covered. screenshot from Azure pricing calculator about RPs What will I miss if I ignore Weekly & Monthly RPs? What scenarios would need Weekly & Monthly RPs?
Azure Backup Retention strategy
For starters, you shouldn't need to download you apps from Heroku: A Heroku app’s Git repository is intended for deployment purposes only. Cloning from this repository is not officially supported as a feature and should be attempted only as a last resort. Do not use this repository as your app’s canonical “origin” repository. Instead, use your own Git server or a version control service such as GitHub. Even if you're not using GitHub or similar each of your developers should have a copy of the application. There's no way to "develop live" on Heroku, so there must be at least one other copy of each application (unless they've been deleted). As far as databases go, if you're using Heroku Postgres you can download a copy of your database using heroku pg:backups:capture followed by heroku pg:backups:download, as documented. Other database addons have different needs. Your best bet to make sure you have everything is to spin your application up somewhere else and make sure it works. If you have a test suite, run it. Validating data will be tricky, but you can check some simple metrics like the number and names of tables, the number of records in each table, making sure you can log in with any accounts you have and that their application-facing data looks reasonable, etc.
I do not understand the structure on Heroku (I'm not a programmer). How can I download apps, databases before closing the account?
How to make sure I download all data on Heroku before closing the account?
If you use the backup option in the Notepad++ preferences, it also writes unsaved changes to the backup folder. The default backup folder on Windows is: C:\Users\[user]\AppData\Roaming\Notepad++\backup
From the comments: the asker had "backup every 7 seconds" enabled but could not find the file there; the file itself lived on another drive (E:), which may or may not be related.
I had a file open in Notepad++, made changes, and forgot to save them there - no big deal, since Notepad++ auto-saves the session when you close it. Then I accidentally changed the same txt file in regular Windows Notepad and saved, which overwrote the original file without the changes I had made in Notepad++. When I went back to Notepad++ I got the message that the file had been modified somewhere else and was asked to reload it, and again by accident I clicked 'reload'. Is there a way to retrieve the data that was lost?
Notepad++ reloaded a file that wasn't saved on notepadd++ and was modified caused deleted info
In Firebird 2.5 and higher, you can grant a user the RDB$ADMIN role in a database. This will give that user owner or SYSDBA equivalent rights in that database. GRANT [ROLE] RDB$ADMIN TO username See also RDB$ADMIN Role in the Firebird 2.5 language reference. A user with the RDB$ADMIN role can backup the database, provided the role is explicitly specified (option -role or -ro). If you think that granting administrator rights to a user might be too much, consider that a user who can backup and restore a database can essentially do anything to the database. For example change owner on restore, or restore on a different machine where they are SYSDBA make necessary changes like granting privileges, manipulate data, etc and then back that up and restore over the original. Firebird 4 will introduce an additional privilege USE_GBAK_UTILITY which can be use to specifically grant a user to only perform gbak operations. My previous point is an important caveat: a user that can backup and restore can do more than you think. In other words, allowing a user to backup a database without granting them some form of administrator control over the database is not possible.
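A minimal sketch of a backup and restore run by such a user with gbak; the user, password, and paths are placeholders, and the role must be passed explicitly:
# backup as the non-owner user, explicitly naming the role
gbak -b -user BACKUPUSER -password secret -role RDB$ADMIN /data/employee.fdb /backups/employee.fbk
# restoring into a new database file works the same way
gbak -c -user BACKUPUSER -password secret -role RDB$ADMIN /backups/employee.fbk /data/restored.fdb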
Is it possible to create a Firebird 3 user who may do backups of a given database but cannot connect as sysdba and use things like tracing or looking into the environment of all sessions?
Firebird 3 backup by non SYSDBA and non DB owner?
It is not a bug; it is by design. See this link: Supported scenarios. You can view reports across vaults and subscriptions, if the same storage account is configured for each of the vaults. The storage account selected must be in the same region as the Recovery Services vault.
I am setting up Azure Backup report, I have a Recovery Service Vault in Canada East, and a storage account located in Canada Central. when I try to pick up storage location for Backup report, there is none. It looks like you only can choose storage account at the same location with Recovery Service Vault. Is this a bug? Do I have to create another storage account in Canada East? Thank you for help
Azure storage account location with Recovery Services vault location
To the best of my knowledge there is no SQL statement to back up a MySQL database with the syntax you are using; that BACKUP DATABASE ... TO DISK syntax belongs to MS SQL Server's T-SQL. For MySQL you have the following options to back up your database:
1. Use mysqldump as a logical backup tool.
2. Use MySQL Enterprise Backup if you have MySQL Enterprise Edition.
3. Copy MyISAM tables by simply copying their files.
4. Write a SQL script that copies the content of the tables into text files using: SELECT * INTO OUTFILE 'fileName' FROM tableName
You could of course also use replication or file system snapshots. If you want to back up the database from VB.NET (as mentioned in the comments), you can use MySqlBackup.NET, an alternative to mysqldump; the website and documentation are at https://github.com/MySqlBackupNET/MySqlBackup.Net For more information, read the MySQL backup documentation.
BACKUP DATABASE dbwebsite TO DISK 'C:\Users\Paeng\Desktop\mydatabase.sql'; It always says error Query : BACKUP DATABASE dbwebsite TO DISK 'C:\Users\Paeng\Desktop\mydatabase.sql' Error Code : 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DATABASE dbwebsite TO DISK 'C:\Users\Paeng\Desktop\mydatabase.sql'' at line 1
How to backup mysql database using query
The steps I took to solve this problem are below:
1. I went to services.msc and restarted the Veeam Installer Service.
2. In the Veeam software, under the Backup Infrastructure section, I rescanned all servers and set my credentials again.
Finally, I restarted the server and the problem was solved.
I have a problem with Veeam Backup & Replication 9. When I try to restore one of my virtual machines to a new location that is on my backup server, I get the error "[Backup Server Name] Failed to connect to installer service." Can someone please help me with this case? My Windows Server is 2012 and my virtual machine is a Hyper-V virtual machine.
Failed to connect to installer service
declare @pvm varchar(30), @dumptorun varchar(300), @dbname varchar(70) select @pvm=(CONVERT(varchar(30), GETDATE(), 112)) select @dbname='master' select @dumptorun = "dump database " + @dbname + " to '/backup/DB/"+ @dbname+"_"+ @pvm + ".dmp'" select @dumptorun EXEC ( @dumptorun ) This works on Unix - you would need to adjust for windows as your original question has a windows directory but your answer seems to imply a unix directory type instead so would need the slashes and drive etc to be changed. The key is that you need quotes around the backup file name so I just changed your concatenated strings to double quotes so that its easier to add the single quote you need.
How to backup sybase database with date/time stamp using command line? Saw somebody posted this method: declare @pvm varchar(30), @dumptorun varchar(300), @dbname varchar(70) select @pvm=(CONVERT(varchar(30), GETDATE(), 112)) select @dbname='master' select @dumptorun = 'dump database '+@dbname+' to d:\temp\'+@dbname+'_'+@pvm+'.dmp' select @dumptorun EXEC ( @dumptorun ) Tried it and removed the go, but still stuck with some errors, it complains some syntax errors with "/" Anybody can help? Thanks.
How to backup sybase database with date/time stamp
Kubernetes does not recommend skipping minor releases, so you should upgrade to 1.8, then 1.9, and so on. Deprecated APIs are supported for one release; for example, if you have any Deployments, they are on the extensions beta API, which will not be supported by the 1.11 release, where they are on the apps API. I don't think you're doing yourself any favors by trying to skip versions. Either way it will be a long manual process. – answered Oct 19, 2018 by Lev Kuznetsov
I manually installed a Kubernetes cluster of 3 nodes (1 master, 2 workers). Now I want to upgrade the k8s version (say, from 1.7 to 1.11). As the gap is large, the preferred method would seem to be forcefully reinstalling all the required packages. Is there a better way to do this? If yes, could you please tell me how? Assuming I do the upgrade by reinstalling packages, I would want to manually back up everything first (configuration, namespaces, and especially persistent volumes). From the Kubernetes homepage I found that juju is recommended, but as I'm not running juju, what would be an alternative to do it manually? Thank you!
Kubernetes manual backup
This purely depends upon the product you are going to develop/host within Azure. There are several factors, such as SLAs (Service Level Agreements), compliance requirements, and audit/policy requirements. Say your product is in the healthcare or financial domain; in such a case you need to follow certain policies and compliance standards: healthcare-related products should be HIPAA compliant, and financial/card products should be PCI DSS compliant. You can find the full list of Azure compliance offerings in the Azure documentation. A separate backup plan may not be needed: Azure has a lot of services for managing backups, and if your project/product's compliance, audit, and policy requirements are covered by Azure, then you don't really need a separate backup of your own.
I want know is there any need to have backup plan? I am just curious about if azure have its policy that they can maintain backup of all application they have installed on there server so is there any need to take extra plan to have separate backup of our own code and database ? Please guide me ?
is there any need to have azure backup plan?
1 You set archive_command to a shell command that copies the WAL file to a safe archive location, so that burden is mostly on you. When PostgreSQL runs archive_command, it assumes that the WAL file is not corrupted. Only a PostgreSQL bug or a bug in the storage system could cause a corrupted WAL segment. There is no better protection against PostgreSQL bugs than always running the latest bugfix release, and you can invest in storage hardware that will at least detect failure. You can also write your archive_command with a certain amount of paranoia, e.g. by comparing the md5sum of the WAL segment and its archive copy. Another idea is to write two copies of the WAL file to different storage systems. Share Follow edited Aug 31, 2018 at 12:24 answered Apr 9, 2018 at 7:07 Laurenz AlbeLaurenz Albe 225k1818 gold badges234234 silver badges303303 bronze badges 2 It's the paranoia which make us alive. Despite the jokes, Depending your database size and database integrity ensurement you need, you could write some automation to restore the original database for testing purposes, if it works then your backup have integrity, if it don't then you can check which Wal file is corrupted and recover it from pg_wal dir (it looks like postgresql don't delete it immediately, if I get it). – deFreitas Aug 28, 2018 at 14:28 I think it would be great if postgresql write some tool just to check the Wal files backup and could check if the Wal files set have integrity – deFreitas Aug 28, 2018 at 14:30 Add a comment  | 
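A sketch of a "paranoid" archive_command along those lines; the archive directory and the choice of md5sum are assumptions, and the important property is that the script returns non-zero on any failure so that PostgreSQL keeps the segment and retries:

#!/bin/bash
# Called by PostgreSQL as: archive_command = '/usr/local/bin/archive_wal.sh %p %f'
SRC="$1"        # %p: path of the WAL segment, relative to the data directory
NAME="$2"       # %f: file name of the segment
DEST="/var/lib/pgarchive/$NAME"

# Never overwrite an already-archived segment.
if [ -f "$DEST" ]; then exit 1; fi

cp "$SRC" "$DEST.tmp" || exit 1

# Compare checksums before making the copy visible under its final name.
if [ "$(md5sum < "$SRC")" != "$(md5sum < "$DEST.tmp")" ]; then
    rm -f "$DEST.tmp"
    exit 1
fi

mv "$DEST.tmp" "$DEST"

With that script in place, postgresql.conf would contain something like archive_command = '/usr/local/bin/archive_wal.sh %p %f'.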
I am trying to configure database backups in PostgreSQL with pg_basebackup and WAL logs. For now I create a full backup once a week and want to back up the WAL logs too. But, as I understand it, PostgreSQL writes them all the time. So how can I copy them and be sure that they are not corrupted? Thanks
Backup postgresql WAL logs
How does permission got asked again? Short answer: is by asking it again. Some explanation: You can provide an alert dialog or something to demonstrate why you need this permission, also it's not required to do that every time because the user can know which permission required depend on action, however, there is a method I used to check for permissions and it's look like: public class CheckPermissions { public static boolean hasPermission(int PERMISSION_REQUEST, String permission, Context context) { if (ContextCompat.checkSelfPermission(context, permission) != PackageManager.PERMISSION_GRANTED) { // Should we show an explanation? if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, permission)&& ContextCompat.checkSelfPermission(context, permission) != PackageManager.PERMISSION_GRANTED){ return false; } else { ActivityCompat.requestPermissions((Activity) context, new String[]{permission}, PERMISSION_REQUEST); } return false; } else { return true; } } } So you can check every time you need. if (CheckPermissions.hasPermission(PERMISSION_REQUEST, Manifest.permission.YOUR_PERMISSION, context)){ // do some thing }
I am seeking to copy my SQLite database to "external" storage as a means of backing up my database file. From there the user can grab the .db file and move to a place deemed safe. In following the Android Developer documentation on getting permission to WRITE_EXTERNAL_STORAGE, I have entered the code shown in verbatim and I stepped through it with the debugger. (I changed the READ_CONTACTS to WRITE_EXTERNAL_STORAGE) It will first checkSelfPermission and skip shouldShowRequestPermissionRationale going to requestPermissions. I'm then given an alert popup asking for my permission. If I deny it, it will then, on the second run through, go to checkSelfPermission again and then go to shouldShowRequestPermissionRationale but skip requestPermissions and terminate not asking for permission again as the Android documentation says it should. My Questions: Am I supposed to add a call to requestPermissions in the shouldShowRequestPermissionRationale if() block to get it to ask again? Or is there another way? If I want to explain the reason or rational for why the permission is needed, am I supposed to implement my own AlertDialog box? Or is there a set method to the ask again AlertBox I haven't discovered yet to provide the system with my explanation? So far everything I have done in the writing of this app has been a learning experience. I'm having a great time. EDIT: I do have the <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> set in my Manafest.
How Does Permission get asked Again?
Backing up a brick alone is not a good idea. To keep it simple, you can run rsync (or any backup tool) from a client machine to wherever your destination is, OR you can make use of Gluster geo-replication to do the backup. Note that with geo-replication the backup destination must be a Gluster volume. – answered Feb 1, 2018 by kumar
Comment: Thanks for the advice. I use a filesystem agent from a commercial backup software. I guess I'll just do two backups, one of one of the bricks and one from one of the clients... – Chris R, Feb 2, 2018
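A minimal sketch of the client-side rsync approach, assuming the volume is mounted at /mnt/glustervol on a client and the backup target is reachable over SSH (both paths and host names are placeholders):

# Run from a Gluster client, not from a brick, so you back up the volume's logical view.
rsync -aAX --delete /mnt/glustervol/ backupuser@backuphost:/backups/glustervol/

# Example crontab entry to run it nightly at 01:30.
30 1 * * * rsync -aAX --delete /mnt/glustervol/ backupuser@backuphost:/backups/glustervol/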
So, I have a three node gluster sharing a single volume. There are two clients connecting to that volume. I'm installing my backup agent on all the nodes and clients. I would like to try to reduce duplication of backups not only for space but for network overhead. This is not mission critical data. Would it be sufficient to just back up the brick on the first gluster node and maybe one of the two clients or just the brick? My backup software would be just doing a standard file system backup. I know this is a subjective question but I would just like to get some feedback. Thanks, Chris
Advice on gluster volume backups
1 Use the cbbackupmgr command line tool. "It backs up and restores bucket data, views creation scripts, index creation scripts, bucket configurations, and so on" You can find this utility in the bin folder. For example, here it is on Linux: root@15ca2cdf844e:/opt/couchbase/bin# cbbackupmgr cbbackupmgr [<command>] [<args>] backup Backup a Couchbase cluster compact Compact an incremental backup config Create a new backup configuration help Get extended help for a subcommand list List the archive contents merge Merge incremental backups together remove Delete a backup permanently restore Restore an incremental backup Optional Flags: --version Prints version information -h,--help Prints the help message root@15ca2cdf844e:/opt/couchbase/bin# You can find more information about it in the Couchbase documentation. UPDATE: If you are looking to include/exclude specific buckets, you'll need to look at the cbbackupmgr config. See documentation on cbbackupmgr config. Share Follow answered Jan 24, 2018 at 14:29 Matthew GrovesMatthew Groves 25.6k1010 gold badges7272 silver badges122122 bronze badges Add a comment  | 
How can I back up all data and indexes from one specified bucket in Couchbase (NoSQL)?
How to backup all data from specified bucket in couchbase (NOSQL)?
1 OK! So I found my issue. Exporting of a mySQL database was failing during the backup. I found this by reviewing the cpanel log files found in /usr/local/cpanel/logs/cpbackup For more information about cpanel logs visit: https://documentation.cpanel.net/display/CKB/The+cPanel+Log+Files Share Follow answered Dec 18, 2017 at 21:21 Michael RiccardiMichael Riccardi 2111 silver badge33 bronze badges Add a comment  | 
I'm backing up 35 or so accounts to Amazon S3. The connection is good, all of the backup files are being written, but afterwards in the backup folder I have a tiny file named 'backup incomplete' and you open it, it shows the date. The WHM/Cpanel side obviously is marking this incomplete, but I'm not sure why as the file sizes seem to be identical to the actual on-server data. I've double checked that there are no disk space issues both on the server and on the destination S3 bucket. I've verified that the backup configuration is correct, and validated the connection to the S3 bucket. I am using vultr VPS for hosting, if that matters. I do have VPS snapshots taking place but they happen about 5 hours after the S3 backup starts. I have 5 other vultr VPS servers setup with this same configuration with no issues. Any ideas on where to look to find why this is happening and resolve it?
Backups marked incomplete, but seem to be full backups, while using vultr VPS with WHM Cpanel destination Amazon S3. How do I resolve this?
1 Oracle Enterprise Manager has a wizard that helps setting up recommended backup strategy. See the following white paper (Oracle RMAN Backups: Pushing the "Easy" Button) for details.. Share Follow answered Dec 9, 2017 at 21:55 Younes El-karamaYounes El-karama 17977 bronze badges Add a comment  | 
This question already has an answer here: How do I manage automatic backups with Oracle? (1 answer). Closed 6 years ago. What is the correct and best approach for backing up Oracle automatically? Which should be used to schedule the backup: Oracle jobs, OS tools like cron, or some other way? Are there any powerful scripts for implementing a backup strategy (something like Ola Hallengren's scripts for SQL Server)?
Schedule Oracle backup [duplicate]
If you are talking about Azure SQL, it offers built-in backups for up to 90 days (not sure about 90, but 35 for sure) without you having to do anything. Take a look at the Azure SQL automated backups documentation. – answered Nov 8, 2017 by 4c74356b41
Comment: That is interesting, but two things I noticed in just doing it: it creates a new database from the restore and does not overwrite the existing one out of the box. I would still need a method using Azure PowerShell, the API, or other calls to do this. It is awesome, though, that they now include that out of the box as part of Azure. – djangojazz, Nov 8, 2017
So I am migrating a database from a local SQL Server 2014 to an Azure database. I migrated successfully using a bacpac and the connections are working fine. However I still want to perform backups and of course the old school file backup does not work. So I hunted around and saw that Microsoft recommends backing up to blob storage. At a cost of course along with other things. My question is, is there an easy way to use SMO references or similar to just remotely make a bacpac stored locally and then used remotely as needed? My real issue is maintainability and if I ever wanted to switch back to local to not be so iron clad bound to azure methods. I feel a bacpac may do this. So far I of course tried using file backups, which of course the syntax won't even work. I then started looking at this link: https://jasonstrate.com/2013/04/18/backing-up-azure-sql-database-to-the-cloud/ and was working on it. Any help or suggestions are appreciated.
Is there a way to programmatically backup and restore Azure database with a bacpac or similar?
We got the solution from the Microsoft Support. In Windows 2012 and 2016, the WBAdmin.etl has been replaced by WBEngine.etl. Modified the code accordingly and the process is working fine now.
We have a system backup process which will create the backup process of all drives. The command is WBADMIN START BACKUP -backupTarget:\\%servername%\sysBackup -include:C:,D:,E:,F: -systemState -quiet -vssFULL In Windows 2008 Server this command is working fine and it is creating the Wbadmin*.etl files in C:\Windows\Logs\WindowsServerBackup. But when we execute the same command (Both as Administrator) in Windows 2016 Server, the Wbadmin*.etl file is not getting generated in the above folder and the process is failing since the etl is not available. Can anybody help me out to sort out this issue? Any help will be much appreciated. Please let me know if the question is not clear or you need any further information.
WBADMIN is not creating the etl file in C:\Windows\Logs\WindowsServerBackup
1 To use the replication feature you need to install/config the replication plugin. it's part of the core plugins so it is packaged within the Gerrit war file and can be installed running a Gerrit initialization (java -jar gerrit.war init). The plugin will mirror all changes to another Gerrit server which will be used as warm-standby backups or a load-balanced mirror. If you're only interested in backup maybe running rsync/mysqldump is a better/simple solution. Unfortunately it's necessary to stop Gerrit before performing the backup to make sure the filesystem and the database are synchronized. You need to execute something like this: service gerrit stop rsync -avh --delete GERRIT-SITE/ SOME-LOCATION mysqldump --host=DB-HOST --port=DB-PORT --user=DB-USER --password=DB-PASS DB-DATABASE > SOME-LOCATION/gerrit-dump.sql service gerrit start You can optimize the time Gerrit will be stopped by running rsync first (with Gerrit up), stop Gerrit and then execute rsync again (the second execution will be very fast). Share Follow answered Sep 11, 2017 at 11:00 Marcelo Ávila de OliveiraMarcelo Ávila de Oliveira 20.9k33 gold badges4242 silver badges5353 bronze badges 2 OK I'm running the gerrit replication in a test enviroment. One problem was, that I've to import the rsa ssh key in the ~/.ssh/known_host, now it replicate new projects on the destionation. One additional question: How to import the changes on the destination automatic? I see the new projects/changes only after restart the gerrit daemon. Thank you so far. – grefabu Sep 11, 2017 at 14:17 Do you mean you need to restart Gerrit everytime you have changes or the replication doesn't work? This is weird... – Marcelo Ávila de Oliveira Sep 11, 2017 at 14:25 Add a comment  | 
We have a running Gerrit 2.14.2 with a MySQL backend. Now we want to mirror/back it up, but I don't understand the replication feature. Is it necessary to set up a full Gerrit/git installation with the same config as the source instance? The other way that gets mentioned would be a DB dump replayed on the other machine plus an rsync of the git repositories?
gerrit - understanding replication or create a backup
1 At first, use mount or cat /proc/mtd command to identify your block devices. And then copy all image to file: cat /dev/mtd/mtd16ro > /sdcard/my_system_image.bin /dev/mtd/mtd16ro - system partition at my tablet (your may be different) /sdcard/my_system_image.bin - your image at your sdcard Share Follow answered Sep 3, 2017 at 3:55 bukkojotbukkojot 1,52411 gold badge1111 silver badges1616 bronze badges Add a comment  | 
1. I have rooted my tablet (Android 5.1.1). 2. I have installed some apps as system apps. 3. I need to take a backup of my entire Android OS, including data_user and system apps, all together. Is it possible to take an image of the Android OS including everything?
How to take the entire Android OS as an image file?
The easiest way to make a copy of a Wordpress site is to use a plugin like Duplicator. It handles everything from copying the files to updating the WP database with the new domain etc. Install the plugin on the website you want to copy (i.e. your development environment) Build the package - this creates a single archive file with the site database and files. Depending on your hosting settings you might get timeout errors when building the package. In that case I exclude the uploads folder and then copy it across manually Copy the archive & installer to the destination environment Create an empty database for your new site Run the installer.php that you uploaded to the destination. It will guide you through entering the new database and domain details And that should be it! You might need to save your permalinks or make some other tweaks depending on your own setup, but its usually that simple.
What's the best way to mirror, when you have a development environment and a live environment with WordPress (including two different URL's). Is it simply by making a backup of one environment (FTP->Data, SQL->Database) and setting it up on the other environment?
How to mirror WordPress development environment and live site?
1 @AjayKumar-MSFT's link to Partial Backups works, but here's the details: create a file called _backup.filter list one directory or file per line Upload _backup.filter file to the D:\home\site\wwwroot\ directory of your site Example: \site\wwwroot\Images\brand.png \site\wwwroot\Images\2014 \site\wwwroot\Images\2013 Share Follow answered Aug 8, 2019 at 20:27 phearcephearce 1111 bronze badge Add a comment  | 
How do I exclude a sub folder/directory from the Azure Backup for an App Service? Our backup system seems to fail because this folder makes the site exceed the backup limit. So I'd like to exclude only that folder. The website + database size exceeds the 10 GB limit for backups. Your content size is 10 GB.
How do I exclude a sub folder/directory from the Azure Backup for an App Service?
1 Here is what I came up with. Not the most elegant, but it got the job done. I've gone through an anonymized the script but it wouldn't be hard to replace the directory paths to what suits your needs. @echo off rem Copies the targeted directory to a folder in my documents and appends the date. rem It also copies it to a backup dir on the shared drive itself rem This part uses the current date a time, but arranges it into a useful manner For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b) For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b) rem This copies the folder from the shared drive to a local copy in Documents. robocopy "Z:\Support\Development" "C:\Users\SmithJ\Documents\Support_Tool\%mydate%" /LOG+:"C:\Users\SmithJ\Documents\Support_Tool\log.txt" rem This copies the same shared folder to a backup folder on the share drive itself just to be sure. robocopy "Z:\Support\Development" "Z:\SmithJ\Support_Tool_Backup\%mydate%" /LOG+:"Z:\SmithJ\Support_Tool_Backup\log.txt" Share Follow answered Jul 19, 2017 at 16:16 pmcfarlandpmcfarland 10511 gold badge33 silver badges1010 bronze badges Add a comment  | 
I used to maintain an in house cmd line tool for support techs. As such I wanted to make sure that I kept a archive of previous versions of the file in case something went sideways. I wanted to create a simple batch file that I could run as a scheduled task every X days to make a copy of the development folder on a network share.
Simple backup script
1 On your job you can add BitBucket credentials. But what I did for mine was making the jobs folder a .git repo and I manually commit to it every so often. You can also make a job to do this for you running a bash script I found online: Bash Script #!/bin/bash # Setup # # - Create a new Jenkins Job # - Mark "None" for Source Control Management # - Select the "Build Periodically" build trigger # - configure to run as frequently as you like # - Add a new "Execute Shell" build step # - Paste the contents of this file as the command # - Save # # NOTE: before this job will work, you'll need to manually navigate to the $JENKINS_HOME directory # and do the initial set up of the git repository. # Make sure the appropriate remote is added and the default remote/branch set up. # # Jenkins Configuraitons Directory cd $JENKINS_HOME # Add general configurations, job configurations, and user content git add -- *.xml jobs/*/*.xml userContent/* # only add user configurations if they exist if [ -d users ]; then user_configs=`ls users/*/config.xml` if [ -n "$user_configs" ]; then git add $user_configs fi fi # mark as deleted anything that's been, well, deleted to_remove=`git status | grep "deleted" | awk '{print $3}'` if [ -n "$to_remove" ]; then git rm --ignore-unmatch $to_remove fi git commit -m "Automated Jenkins commit" git push -q -u origin master Share Follow answered Jul 25, 2017 at 13:16 JEuvinJEuvin 97511 gold badge1313 silver badges3232 bronze badges Add a comment  | 
I have taken a look at several approaches for backing up Jenkins home directory. One of the more intriguing (and seemingly complete), methods for doing so is configuring a Jenkins job which backs up Jenkins home directory to source control. I am trying to get clear on exactly what will need to be performed (via an 'execute shell' build step), in the backup process. We currently use BitBucket for source control (GIT), and have one Jenkins Master with four Jenkins Agent / Slave build nodes. Prerequisites: Initialize Jenkins home directory as a git repo. Steps to Backup Jenkins: CD to Jenkins home directory. Commit all changed files in the Jenkins Home local repository to the master branch of the repository. Push the changes from the local repository to the BitBucket repository. FIN. A most recent copy of Jenkins home is now stored in source control in the event we need a backup. So I have several questions for those whom have used this approach before for automated backups of Jenkins: Is a backup of Jenkins Home directory alone enough to backup Jenkins when a distributed build (agent / slave), system is in place? Should any files / directories be excluded in the backup of Jenkins Home directory? Will initializing Jenkins Home directory as a GIT repository have any adverse effects as far as Jenkins is concerned? I've noticed some tutorials mention creating a user credential Jenkins will use when connecting to BitBucket. How does this work? Any additional advice is also welcome.
Backing up Jenkins home directory with a Jenkins job
I do backups exactly the way you want to, and it can be done, since duplicity has support for instance profiles. Make sure to give the role appropriate access to the S3 bucket and attach it to your instance.
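A rough sketch of what this can look like on the instance, assuming the EFS filesystem is mounted at /mnt/efs and the bucket name is a placeholder; with an instance profile attached, duplicity's boto backend picks up temporary credentials automatically, so no API keys need to be stored on the AMI (note that older duplicity releases spell the target s3+http://my-backup-bucket/efs-backups instead of s3://):

# Weekly full backups, incrementals in between; GPG encryption is on by default
# (pass --no-encryption if you explicitly do not want it).
duplicity --full-if-older-than 7D /mnt/efs s3://my-backup-bucket/efs-backups

# Keep only the two most recent full backup chains.
duplicity remove-all-but-n-full 2 --force s3://my-backup-bucket/efs-backups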
Goal: Automated full and incremental backups of an AWS EFS filesystem to an S3 bucket. I have been looking at Duplicity/Duply to accomplish this, and it looks like it could work.I do have one concern, you would have to store API keys in the clear on an AMI for this to work. Is there any way to accomplish this using a role?
Duplicity/Duply Backup to S3 without API Keys?
1 It makes sense even though you could consider managing data in container or volume (or host folder mounted in the container). That way the data remains persistent even when you stop and restart the container. what is the approach in doing that checkpoint If your container does not mount a volume, and has its data inside, then yes, stopping and removing a container will lose that data. One possibility is to create that snapshot with docker commit. That will freeze the container state as a new image, that you can run later. Example: $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours desperate_dubinsky 197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours focused_hamilton $ docker commit c3f279d17e0a svendowideit/testimage:version3 f5283438590d $ docker images REPOSITORY TAG ID CREATED SIZE svendowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB Share Follow answered Jun 15, 2017 at 4:50 VonCVonC 1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges Add a comment  | 
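If you go the volume route instead (data kept outside the container), a common pattern is to snapshot the volume contents with a throwaway container; this is only a sketch, with the volume, container, image and archive names as placeholders:

# Create the data volume and use it in your service container.
docker volume create mydata
docker run -d --name myapp -v mydata:/var/lib/app myimage

# "Checkpoint": archive the volume contents into the current directory.
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine tar czf /backup/mydata-checkpoint.tgz -C /data .

# "Restore": wipe the volume and unpack the checkpoint back into it.
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine sh -c "rm -rf /data/* && tar xzf /backup/mydata-checkpoint.tgz -C /data"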
I am completely new to docker. I have a quick question about docker images. Assume that I have setup a local docker image with certain software / server installed. So now I would need to set a checkpoint / snapshot here, then all the work done after this checkpoint is temporary; which means at a certain time, I would restore the original image (from that checkpoint) and overwrite everything in the temporary image. My first question is if the above use-case make sense? My second question, if the above make sense, what is the approach in doing that checkpoint (simply how, as I am keeping the checkpoint image in local diskspace only, no cloud repos involved) and how to restore the images to overwrite everything in the temporary image when needed. Though I have read a bit of docker documentation, but am still struggling in the conceptual things.
the approach to restore a pre-configured docker image
The dump ends up in the $HOME of the user postgres. pg_dump just echoes to stdout unless you specify -f: -f file / --file=file: "Send output to the specified file. This parameter can be omitted for file based output formats, in which case the standard output is used. It must be given for the directory output format however, where it specifies the target directory instead of a file. In this case the directory is created by pg_dump and must not exist before." (formatting mine) So in your case the file backup_db will be in the same directory where you ran pg_dump my_db > backup_db. Next time, try specifying a full path so you know the exact location.
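A small sketch of both variants; the target directory is an assumption, any path with the right write permissions works:

# Redirect stdout to an explicit absolute path (the redirect is done by your shell,
# so the target directory must be writable by the invoking user).
sudo -u postgres pg_dump my_db > /var/backups/postgres/my_db_$(date +%F).sql

# Or let pg_dump write the file itself via -f / --file (then the postgres user needs write access).
sudo -u postgres pg_dump -f /var/backups/postgres/my_db_$(date +%F).sql my_db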
I want to download dumped PostgreSQL databases from an Ubuntu 16.04 server. sudo su - postgres pg_dump my_db > backup_db Search for the path yields the following: ps auxw | grep postgres | grep -- -D postgres 7311 0.0 0.0 293332 3384 ? S Mai04 0:39 /usr/lib/postgresql/9.5/bin/postgres -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf Yet I cannot find the dumped files there. What is the location of the dumped files?
PostgreSQL DB dump location
1 The one folder you need to backup is the one referenced by the environment variable JENKINS_HOME It is best to keep that folder separate from the installation folder like C:\Program Files (x86)\jenkins. Then I prefer using a tomcat instance, and copy the jenkins.war in it: it is easier to upgrade: Simply overwrite your jenkins.war with the new version. Tomcat should automatically redeploy the application. Share Follow answered May 24, 2017 at 20:20 VonCVonC 1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges 3 @TaylorLiss directly as java -jar jenkins.war, I suppose? – VonC May 24, 2017 at 20:31 I checked in our configuration page for Jenkins and Home Directory is set to D:/Jenkins. So wouldn't that mean I would need to copy the whole folder (as I've been trying to do?) – Taylor Liss May 24, 2017 at 20:45 1 @TaylorLiss yes, except for the files needed to run jenkins itself (like jenkins.war). That is why I recommend setting the Home Directory for Jenkins not where jenkins.war is. Otherwise, you would get stackoverflow.com/a/38606016/6309 – VonC May 24, 2017 at 21:23 Add a comment  | 
My organization needs to make backups of our heavily customized Jenkins instance. After doing some research on different methods for backing up Jenkins, we decided to go the route of copying the whole Jenkins directory using xcopy and then moving the backup to a new instance on a different machine. (The reason for using xcopy is that its the only way to preserve they symbolic link files within each job.) Here's the steps I have taken: A batch file uses xcopy to copy the entire D:\Jenkins directory on a nightly basis from the old machine I install a fresh instance of Jenkins on a new server I stop the Jenkins service from running I delete the current Jenkins directory in the new machine and then xcopy the backup in its place I attempt to start the Jenkins service and I am met with the following error: The Jenkins service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. I have tried running jenkins.war from the command line and that just causes a Jenkins instance to start up that doesn't register as a windows service, and I cannot login to (even after disabling useSecurity), and looks like it doesn't have our modifications present. I have also tried clearing the application log and that did not help. I am not sure how to get the Jenkins service up and running.
Jenkins service won't start after backup
By default, rsync uses the quick-check method, which only transfers files that differ in size or last-modified time. As you report that the sizes are unchanged, that would seem to indicate that the timestamps differ. Two options to handle this are: use -t (--times) to preserve timestamps when transferring files, or use --size-only to ignore timestamps and transfer only files that differ in size.
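To get the list of files that need re-fetching without transferring anything, a dry run along these lines should work (the paths mirror the ones in the question; the awk filter assumes file names without spaces):

# -r recurse, -n dry run, -i itemize changes, --size-only ignore timestamps.
# Itemized lines starting with '<' are files rsync would have to send, i.e. files
# that are missing or have a different size on the backup server.
rsync -rni --size-only ~/mydir/ user@backup_server:~/mydir/ | awk '$1 ~ /^</ {print $2}'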
I need to compare two directories to validate a backup. Say my directory looks like the following: Filename Filesize Filename Filesize user@main_server:~/mydir/ user@backup_server:~/mydir/ file1000.txt 4182410737 file1000.txt 4182410737 file1001.txt 8241410737 - <-- missing on backup_server! ... ... file9999.txt 2410418737 file9999.txt 1111111111 <-- size != main_server Is there a quick one liner that would get me close to output like: Invalid Backup Files: file1001.txt file9999.txt (with the goal to instruct the backup script to refetch these files) I've tried to get variations of the following to no avail. [main_server] $ rsync -n ~/mydir/ user@backup_server:~/mydir I cannot do rsync to backup the directories itself because it takes way too long (8-24hrs). Instead I run multiple threads of scp to fetch files in batches. This completes regularly <1hr. However, occasionally I find a few files that were somehow missed (perhaps dropped connection). Speed is a priority, so file sizes should be sufficient. But I'm open to including a checksum, provided it doesn't slow the process down like I find with rsync. Here's my test process: # Generate Large Files (1GB) for i in {1..100}; do head -c 1073741824 </dev/urandom >foo-$i ; done # SCP them from src to dest for i in {1..100}; do ( scp ~/mydir/foo-$i user@backup_server:~/mydir/ & ) ; sleep 0.1 ; done # Confirm destination has everything from source # This is the point of the question. I've tried: rsync -Sa ~/mydir/ user@backup_server:~/mydir # Way too slow What do you recommend?
How can I compare the file sizes match between duplicate directories?
The equivalent of %UserProfile% in PowerShell is $Env:UserProfile. robocopy "$env:UserProfile\desktop" $Destination\$Folder *.* /mir /sec robocopy "$env:UserProfile\pictures" $Destination\$Folder *.* /mir /sec robocopy "$env:UserProfile\documents" $Destination\$Folder *.* /mir /sec If your script is complaining about rights, it's likely that you are running it under an account that does not have permission to the user's folder. User folders are secured to the specific user by ACLs by default. You can likely get around this by running your script with Administrator rights.
Hi I need to backup the userprofile of multiple PCs, can I use some global commands like %Userprofile% for my Code to backup the logged in User. Also why isn't my script properly backing up the folders which I've told him. The output is currently not accesable it just says you need more rights to open these folders. $Destination=Read-Host "Please type the path directory you want to copy the backup files" #destination $Folder=Read-Host "Please type the root name folder" #name of backup folder $validation=Test-Path $Destination #validate the destination if it has the privileges New-PSDrive -Name "Backup" -PSProvider FileSystem -Root $Destination #temporary folder for the backup if ($validation -eq $True){ Set-Location Backup: } else{ Write-Host "Error!Run Script Again" break } robocopy "C:\Users\user\desktop" $Destination\$Folder *.* /mir /sec robocopy "C:\Users\user\pictures" $Destination\$Folder *.* /mir /sec robocopy "C:\Users\user\documents" $Destination\$Folder *.* /mir /sec Function Pause{ Write-Host "Backup Sucessfull!!! `n" } Pause
Userprofile Robocopy Backup
1 hg unbundle /home/jla/solar_capture/.hg/strip-backup/ca681926dad2-a0fffac7-backup.hg Share Follow answered May 2, 2017 at 19:33 John Lawrence AspdenJohn Lawrence Aspden 17.3k1212 gold badges6969 silver badges111111 bronze badges Add a comment  | 
I botched a histedit, and I'd like my original changesets back: $ hg histedit 3 files updated, 0 files merged, 1 files removed, 0 files unresolved saved backup bundle to /home/jla/solar_capture/.hg/strip-backup/ca681926dad2-a0fffac7-backup.hg There appears to be a backup bundle, how do I make it as if I'd done $ hg histedit --keep like I intended to....
How do I recover from a botched histedit in mercurial?
Join both rsync commands with the && operator. This operator ensures that the second command executes only if the first one finishes successfully (returns exit code 0). So your code will be: if [ $MONTH -eq 1 ] && [ $DAY -eq 1 ]; then rsync -a --force --ignore-errors --compare-dest=$MONTH_COMPARE $EXCLUDE_STRING $SOURCE_LOC ssh $TARGET_DIR/$LASTYEAR/12 && rsync -a --force --ignore-errors --delete --update $EXCLUDE_STRING $SOURCE_LOC ssh $MONTH_COMPARE You can also put those rsync lines into a standalone script and launch it with cron at the desired time, so you would not need the date condition. – answered Apr 4, 2017 by Michal Polovka
Comment: Also, check whether the third argument "--forece" in your original script is correct; I don't have a *nix machine right now, but it looks like a typo ("--force"). – Michal Polovka, Apr 4, 2017
Hi, I want to create a backup rsync script: I want to make an additional copy of the state of one folder at the beginning of the month and compare the changes at the end of the month. My question: how can I run an rsync job after the previous one has finished, so that I don't delete the backup dir before I have compared the changes and backed them up? For now I have: if [ $MONTH -eq 1] && [ $DAY -eq 1]; then rsync -a --forece --ignore-errors --compare-des=$MONTH_COMPARE $EXCLUDE_STRING $SOURCE_LOC ssh $TARGET_DIR/$LASTYEAR/12 rsync -a --forece --ignore-errors --delete --update $EXCLUDE_STRING $SOURCE_LOC ssh $MONTH_COMPARE It is important that the second one doesn't start before the first one has finished.
bash script multiple rsync after previous has finished
Run SLAVEOF <remote-host> <remote-port> on your local Redis instance to make it a replica of the remote one; once it has synced, run BGSAVE to dump the data to disk locally.
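A concrete sketch with redis-cli; the remote endpoint and port are placeholders, and this assumes the remote instance actually allows replication from your machine (managed services such as ElastiCache may not):

# Point the local instance at the remote one and let it sync a copy of the dataset.
redis-cli -h 127.0.0.1 SLAVEOF my-aws-redis.example.com 6379

# Watch master_link_status until it reports "up" (sync finished).
redis-cli -h 127.0.0.1 INFO replication

# Write the local copy to dump.rdb in the configured dir.
redis-cli -h 127.0.0.1 BGSAVE

# Detach again so the local instance stops following the remote.
redis-cli -h 127.0.0.1 SLAVEOF NO ONE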
I have a local Redis on my Ubuntu machine and a remote Redis that I am using on AWS. I wonder, is there any way I can save the data from the remote connection to my local Ubuntu machine?
Redis : How to back up redis data from remote to local?
According to your scenario, you could use Azcopy to copy the VHD to another storage account. The data transfer between Azure data center, it cost you only a few minutes. You could use the backup VHD to recreate a new VM. You could use the following commands. AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:abc.txt Example AzCopy /Source:https://t5portalvhdsx2463gvmvrz7.blob.core.windows.net/vhds /Dest:https://61portalvhdsbkv71y4r29cs.blob.core.windows.net/vhds /SourceKey:VK8LDCaAudknX9LCGPlpemHzB9FXMKZBJxAY2i8YwWAtUWFti3PKW9iNnFvlGX0TN/csvPkUDbnL22cdro5LPQ== /DestKey:fWjwkegnh7lL84GJIpeAEaLaL5uh+upNXvI4tqgtUa8mw71cJuxv4W1vbzJtSabaj+Cg2E06OSUCIX1BmMH/jg== /Pattern:shui-shui-2017-02-02.vhd You could get source key and dest key on Azure Portal. You could download Azcopy from the link. Note:When you copy VHD, you should stop VM.
I am new to Azure and cloud computing in general. I am about to resize my Linux VM OS disk using: Update-AzureDisk –DiskName "" -Label "ResiZedOS" -ResizedSizeInGB But before that, I want to make sure that I don't lose my data for whatever reason. Therefore, I need a backup of my current OS Disk that I am about to expand. Now, I read a little about the backup services provided by Azure. But can't I just download the OS .vhd file to my local computer and, if needed, upload back and reattach to the VM? Am I getting this right? And if I am, could that be done while the VM is running? The VM is classic deployed Ubuntu.
Downloading Azure portal OS .vhd as a backup
Backup: In your [Files] section entry, use the external, recursesubdirs, skipifsourcedoesntexist and uninsneveruninstall flags. See "Backup files and restore them on uninstall with InnoSetup?". Restore: See also the same question. To restore a directory tree, you will need the DirectoryCopy function from "Inno Setup: copy folder, subfolders and files recursively" in your Code section.
How can I easily back up all files (in all subdirectories, to unlimited depth) that already exist in the destination directory before installation, and how can I restore all backed-up files on uninstallation? Thanks in advance. [Files] Source: "setupData\*"; DestDir: "{app}"
Inno Setup - backup directory tree if exists & restore by uninstall
1 The easiest way (if you know the revisions with "real changes") is: dump everything from 0:509 dump the 18 revisions with --incremental switch dumo everything from 603 to HEAD with --incremental switch transfer everything to new server and load them in same order into new repository Share Follow answered Mar 8, 2017 at 14:49 Peter ParkerPeter Parker 29.4k55 gold badges5353 silver badges8181 bronze badges 9 I only wanted to pull those 18 revisions across anyway at the moment, trying to do some changes to repo layout at the same time while merging illicit repositories (facepalm). Where does the --incremental switch go? on the svnrdump/svnadmin dump command? – Chris Watts Mar 8, 2017 at 15:17 oh yes, --incremental on svnrdump, i shall give it a whirl!! i expected the --drop-empty-revs to whip out what i wanted but it must look for different criteria (of which im not sure on..) – Chris Watts Mar 8, 2017 at 15:29 are the revision really empty? What is the commit message? who did them? normally SVN does not store 0-changes in revisions(however you can trick the client to send such stuff to the server). – Peter Parker Mar 8, 2017 at 16:45 the log dialog in tortoise just shows 18 revisions, no revisions in there, but the svndumpfilter carries all the revisions through, not sure why.. it has me confused thats for sure! – Chris Watts Mar 8, 2017 at 16:48 incremental switch doesnt seem to do it, just to prove I'm not being crazy I have added a screenshot of tortoise log to the question – Chris Watts Mar 9, 2017 at 8:10  |  Show 4 more comments
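A sketch of those steps with svnadmin, assuming filesystem access to the old repository; the repository paths are placeholders and the revision numbers in the loop are hypothetical stand-ins for the 18 revisions you identified from svn log (the usual caveat applies: copy-from references into dropped revisions can still break the load):

# 1. Everything up to the last revision before the range of interest.
svnadmin dump /srv/svn/oldrepo -r 0:509 > part-base.dump

# 2. Only the relevant revisions, one incremental dump file per revision.
for r in 511 514 520; do
    svnadmin dump /srv/svn/oldrepo -r "$r" --incremental > "part-r$r.dump"
done

# 3. Everything after the range, again incrementally
#    (if your svnadmin does not accept HEAD, use the number from svnlook youngest).
svnadmin dump /srv/svn/oldrepo -r 604:HEAD --incremental > part-tail.dump

# 4. On the new server, load the pieces in the same order they were dumped.
svnadmin create /srv/svn/newrepo
svnadmin load /srv/svn/newrepo < part-base.dump
for r in 511 514 520; do
    svnadmin load /srv/svn/newrepo < "part-r$r.dump"
done
svnadmin load /srv/svn/newrepo < part-tail.dump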
I have taken a dump of my SVN repository, to migrate to another server, and have noticed that one sub-folder in particular which has 15 revisions changing its content is being pulled across as the full number of revisions for the repository. Is there a way to compact the number of revisions in the dump file? The folder has revisions from 509 > 603, but only has 18 revisions in that range that change its content. I want to remove the other 85 revisions in the range. I have used svndumpfilter with --drop-empty-revs and --renumber-revs, but it only renumbers them to the same revision. My question/problem, is simply how do i compact the revisions in the dump file to only the ones relevant to that folder? I am getting my information as to the "18 relevant revisions" from an svn log of the folder. Hopefully I have provided enough information to be clear what I am trying to do and need help with, please let me know if not and I shall update/edit. Thanks EDIT
compacting revisions in an SVN Dump file
Apparently there was an extra character at the end of my command in the script that Windows doesn't show. I just opened up the script on a Mac, saw the extra character, and removed it. Boom, got it working!
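If you hit something similar, a couple of generic ways to spot invisible characters (often stray carriage returns from editing the script on Windows) without switching machines; the script name here is a placeholder:

# Show non-printing characters; Windows line endings appear as ^M before the end-of-line marker.
cat -A backup.sh

# Quick check of the line-ending style ("with CRLF line terminators" is the giveaway).
file backup.sh

# Strip carriage returns in place if that turns out to be the culprit.
dos2unix backup.sh      # or: sed -i 's/\r$//' backup.sh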
I have a script to backup my MongoDB. However the script was working fine until now. Every time I run the script, it'll say "done dumping...(x documents)". But when I try to access it from FileZilla, it says /home/ubuntu/temp/bcd.gz: open for read: no such file or directory /usr/bin/mongodump -h $HOST -d $DBNAME -u $USERNAME -p $PASSWORD --authenticationDatabase "admin" --gzip --archive=abc.gz /usr/bin/mongodump -h $HOST -d $DBNAME2 -u $USERNAME -p $PASSWORD --authenticationDatabase "admin" --gzip --archive=bcd.gz My first database always backup properly with no issue at all. The issue is always with $DBNAME2. The user I am using has backup permission. Is there something wrong inside the $DBNAME2 database? If so, how can I fix this?
MongoDB mongodump succeeded but can't open the archive
Your code doesn't reset the b value when your if block runs. You create a new file, but the b variable keeps increasing, so the if will execute only once and numberOfPieces never reaches zero. Try adding a line inside your if that sets b back to 0: if (b == splitFileSize) { output.close(); i++; numberOfPieces--; output = new BufferedOutputStream(new FileOutputStream(fn + i)); b = 0; }
I'm working on a school project with a simple backup and restore class in Java. It has two methods, one for backup, and one for restore, that break the file down into sized chunks determined by the program, and using the restore rebuilds the file from those chunks. Specifically, I am stuck on the backup part: I want to split the file into smaller pieces equal in size to partSize, and create output files named filename.1, filename.2, etc. and then return an integer with the count files created. Here is my code: public static int backup(String filename, double partSize) throws Exception { BufferedInputStream input = new BufferedInputStream(new FileInputStream(filename)); int splitFileSize = (int) (partSize * 1024 * 1024); int numberOfPieces = (input.available() / splitFileSize) + 1; String fn = "filename."; int i = 1; int b = 0; BufferedOutputStream output = new BufferedOutputStream(new FileOutputStream(fn + i)); while (numberOfPieces > 0) { output.write(input.read()); b++; if (b == splitFileSize) { output.close(); i++; numberOfPieces--; output = new BufferedOutputStream(new FileOutputStream(fn + i)); } } output.close(); input.close(); System.out.println("Number of files created: "); return i; } Note: In testing, I think its stuck in an infinite loop. Any idea why? Thank you! And thank you for the edit to make things clearer!
Java simple backup and restore Class
The lightweight tars are empty backups: they contain only a .yml backup report saying there was nothing new to be done. The heavy tars are full backups; they are generated only when something has changed. A change within GitLab can happen because of an automatic process, which is why you can have a full backup generated even if nobody is connecting to your GitLab. – answered Mar 3, 2017 by Renaud
I have a lot of different files in my backup directory for Gitlab. Some of them are 26MB which seems like a whole backup. Others are 10KB. Which one should I keep and why is there two different kind of files ? ls -al backups/ total 34880 drwxr-xr-x 3 uhal uhal 4096 Mar 2 02:00 . drwxr-xr-x 10 uhal uhal 4096 Jun 4 2016 .. -rw------- 1 uhal uhal 10240 Feb 24 02:00 1487898010_gitlab_backup.tar -rw------- 1 uhal uhal 10240 Feb 25 02:00 1487984409_gitlab_backup.tar -rw------- 1 uhal uhal 26716160 Feb 26 02:00 1488070809_gitlab_backup.tar -rw------- 1 uhal uhal 10240 Feb 27 02:00 1488157209_gitlab_backup.tar -rw------- 1 uhal uhal 26716160 Feb 28 02:00 1488243609_gitlab_backup.tar -rw------- 1 uhal uhal 10240 Feb 28 02:00 1488243610_gitlab_backup.tar -rw------- 1 uhal uhal 10240 Mar 1 02:00 1488330010_gitlab_backup.tar -rw------- 1 uhal uhal 10240 Mar 2 02:00 1488416410_gitlab_backup.tar -rw------- 1 uhal uhal 146 Mar 2 02:00 artifacts.tar.gz -rw-rw-r-- 1 uhal uhal 158 Mar 2 02:00 backup_information.yml -rw------- 1 uhal uhal 146 Mar 2 02:00 lfs.tar.gz drwxr-xr-x 4 uhal uhal 4096 Jun 15 2016 tmp UPDATE : March 3rd 2017 tar -xvf 1488502809_gitlab_backup.tar backup_information.yml tar -xvf 1488070809_gitlab_backup.tar repositories/ repositories/marketing/ repositories/thibaut/ repositories/thibaut/jhipster-test.bundle artifacts.tar.gz lfs.tar.gz backup_information.yml
How to deal with Gitlab Backup Files
One approach would be symlinks, but git doesn't store a symlink's target contents as a normal file (see: How does git handle symbolic links?). So the best way to do this is with git hooks: create a hook that copies the required files into the repository folder, replacing the existing ones, before each commit. See the git hooks manual: https://git-scm.com/book/uz/v2/Customizing-Git-Git-Hooks Example command: cp -rf ~/FancyEditor/User/superbSettings ~/mySettings See this too: Run script before commit and include the update in this commit?
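A minimal sketch of such a hook, assuming the repository lives at ~/mySettings and the editor path is the one from the question; save it as ~/mySettings/.git/hooks/pre-commit and make it executable with chmod +x:

#!/bin/sh
# Pull the live settings file into the repo and stage it before every commit.
cp -f ~/FancyEditor/User/superbSettings ~/mySettings/superbSettings
git add superbSettings

Because a pre-commit hook runs before the commit object is written, the freshly copied file ends up in that same commit, so a plain git commit && git push is all you need afterwards.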
I am not entirely sure what this feature is called, so I am just going to describe it, and perhaps someone can tell me how to do this. I have a git repo called mySettings where I store my settings for various editors like Sublime. Currently I am manually copying files between my local system and that repo, and then pushing them to the repo. What I would ideally want is some sort of link between my local files and the repo so that every time I run say git add . --all && git commit -m 'updated local files' && git push it automagically pulls in all my local files that I have linked to that repo. An example: Say on my local system I have ~/FancyEditor/User/superbSettings and I have my fancy repo ~/mySettings/ then if I were to run the above git add . --all && git commit -m 'updated local files' && git push inside ~/mySettings it then has some form of link to ~/FancyEditor/User/superbSettings thus relieving me of having to copy it to that repo every time I want to back it up.
Pushing file to git that is not directly in repo with some sort of link
I managed to fix it by reinstalling the ROM from scratch. After a full wipe, and a complete fresh re-install, it started to work perfectly (still on android 7.1.1)
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about programming within the scope defined in the help center. Closed 5 years ago. Improve this question I saved a backup with titanium, installed a different rom (clean install) and then now I am ready to restore my backup. The problem is: When I open Titanium, it's stuck at "Searching Application Data" message and it won't let me do anything. I already tried: uninstall and reinstall Titanium tried with a older version tried removing SU permission (no message but I can't do anything) tried typing in the terminal as suggested on XDA developers the following su restorecon -FR /data/media/0 with no luck I don't know what else to try. My backup folder has stuff in it, I just need to be able to launch the app... Any idea? Sorry I can't be more precise but I don't really know any more details than that, apart that I am on android 7.1.1. thanks
titanium backup - Stuck on searching application data [closed]
From within Bluemix, no - there is no API available. Compose is looking to make the traditional Compose UI available from within Bluemix over the course of the next few months, at which time it will be. The documentation for the API can be found here: https://apidocs.compose.com/ (linked from https://help.compose.io)
I want to programmatically trigger a backup in an IBM Compose database. Is there a REST API available for that?
Does IBM Compose provide a REST API to trigger a backup?
1 Documentation of Auto-Backup feature https://developer.android.com/guide/topics/data/autobackup.html#EnablingAutoBackup Share Follow answered Jan 10, 2017 at 17:34 arjunarjun 3,55444 gold badges2828 silver badges4949 bronze badges 1 Link answers the question, but you could have put more effort on answering. Reported. – user10018829 Aug 13, 2018 at 3:22 Add a comment  | 
Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Update the question so it focuses on one problem only by editing this post. Closed 6 years ago. Improve this question I'm developing an android app and I want to do an autobackup of the database to google drive when you close the app, but I don't have any clue of how to do that. Do you have any tutorial or something so I can learn how to do this backup.
Backup android sqlite database to google drive [closed]
1 OK, so apparently, there is no clean way. You can hack your way to it by editing /usr/sbin/autopostgresqlbackup : OPT="" to OPT="--exclude-table=my_table" but that might get overwritten when the package is updated. Unfortunately, this variable is not exposed in the configuration file (/etc/default/autopostgresqlbackup). Share Follow answered Jan 9, 2017 at 10:20 BaishuBaishu 1,5131313 silver badges1212 bronze badges Add a comment  | 
Is there a way to exclude a table from autopostgresqlbackup backups ? I see the --exclude-table flag in pg_dump, but is there a clean way to use it in autopostgresqlbackup ?
exclude table with autopostgresqlbackup
find * -type f -exec cp {} {}_$(date +"%m-%d-%Y") \; The above command will take a backup of all the files in that folder. If you want to back up only the .sh files: find * -type f -name "*.sh" -exec cp {} {}_$(date +"%m-%d-%Y") \; – answered Dec 8, 2016 by Prashanth (edited by SLePort)
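One caveat, stated as an assumption about your layout: find * also descends into subdirectories, and each dated copy lands next to its original. If you only want to back up the files directly inside the current folder, a variant like this sketch restricts the depth:

# Only files in the current directory, not in subfolders; the dated copy sits beside the original.
find . -maxdepth 1 -type f -exec cp {} {}_$(date +"%m-%d-%Y") \;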
Let's say I have a folder /tmp with some files: abc.sh, kbc.sh, cdg.sh, nope.py, kim.r, uio.csv. Now, if you are copying new versions of abc.sh and kbc.sh from a different server (like your prod), but you want to keep backups of the existing files in the same folder, named like abc.sh-12-08-2016, how can you do this in one command? So here is the answer: the find ... -exec cp command shown in the answer above takes a backup of all files in that folder, and the variant with -name "*.sh" backs up only the .sh files. Hope it helps.
Finding a File and taking backup in the same directory
I have made a few changes to get relative file path then combine it with source dir, using file class for file deletion etc.. I have changed the order to file deletion first ,otherwise it will cause consistency issue of file not found because folder was deleted before that in some scenario (as you are getting files list recursively using Directory.GetFiles(targetDir, "*.*", SearchOption.AllDirectories); ) // Get existing files in destination string[] existingTargetFiles = Directory.GetFiles(targetDir, "*.*", SearchOption.AllDirectories); // Get existing directories in destination string[] existingTargetDirectories = Directory.GetDirectories(targetDir, "*", SearchOption.AllDirectories); // Compare and delete files that exist in destination but not source foreach (string existingFiles in existingTargetFiles) { if (!File.Exists(existingFiles.Replace(targetDir, sourceDir))) { File.Delete(existingFiles); } } // Compare and delete directories that exist in destination but not source foreach (string existingDirectory in existingTargetDirectories) { if (!Directory.Exists(existingDirectory.Replace(targetDir, sourceDir))) { Directory.Delete(existingDirectory,true); } }
I am trying to make a program to backup my files. I have the copy portion working already but I would like to delete any directory or file not present in the source directory from the destination directory. I am thinking something along the lines of: // Get existing files in destination string[] existingTargetFiles = Directory.GetFiles(targetDir, "*.*", SearchOption.AllDirectories); // Get existing directories in destination string[] existingTargetDirectories = Directory.GetDirectories(targetDir, "*", SearchOption.AllDirectories); // Compare and delete directories that exist in destination but not source foreach (string existingDirectory in existingTargetDirectories) { if (!Directory.Exists(Path.Combine(sourceDir, existingDirectory))) Directory.Delete(Path.Combine(targetDir, existingDirectory)); } } // Compare and delete files that exist in destination but not source foreach (string existingFiles in existingTargetFiles) { if (!Directory.Exists(Path.Combine(sourceDir, existingFiles))) Directory.Delete(Path.Combine(targetDir, existingFiles)); } } Any thoughts on how to make something like this work?
Delete file in target directory only if not present in source directory
1 I've troubles to understand when I should use refresh and/or repair commands in those scenarios According to documentation you should perform refresh when you restore data from a snapshot, the 2nd and the 3rd scenarios. I suppose repair is not required step for all three scenarios. But I would recommend perform it because it is easy and useful step to have consistent data on just restored nodes. Furthermore repair on a regular basis is a recommended part of cassandra cluster maintenance. Share Follow answered Dec 13, 2016 at 14:31 Mikhail BaksheevMikhail Baksheev 1,3941111 silver badges1414 bronze badges 2 Thank you for your answer. But in case of the first scenario, after restarting the node that was shutdown for few hours or days, should I do something else, beside nodetool repair? And for the second one, should I use my snapshot from the lost node or is Cassandra capable of resync everything by itself using the 2 other nodes? – The Wingman Dec 13, 2016 at 14:38 @TheWingman, Repair is enough for the first scenario, even the node was down for a long time. And for the second one, Cassandra can bootstrap data from other nodes (cassandra.apache.org/doc/latest/operating/…) but it can take long time compared to restoring from snashot. – Mikhail Baksheev Dec 13, 2016 at 16:19 Add a comment  | 
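For the restore-from-snapshot cases, the per-node sequence usually looks roughly like this sketch; the keyspace, table, snapshot location and table-UUID directory are placeholders, and you would run it on each node with that node's own snapshot:

# Copy the snapshot's SSTables back into the live table directory for that node.
cp /backups/snapshots/mykeyspace/mytable/* \
   /var/lib/cassandra/data/mykeyspace/mytable-<table-uuid>/

# Tell Cassandra to pick up the newly placed SSTables without a restart.
nodetool refresh mykeyspace mytable

# Then repair so the restored node is consistent with the other replicas.
nodetool repair mykeyspace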
I have a Cassandra 3-node cluster and a keyspace created with a replication_factor of 3. I make my backups for this keyspace with nodetool snapshot. As recommended by Cassandra documentation, to make a global backup I start it with a cron job on each node (3 nodes are NTP synchronized). I'm not using incremental snapshots, it's always a new global snapshot. Unfortunately, I've some troubles with the restore process. First of all, I've set a replication factor to 3 (and QUORUM level of consistency on READ and WRITE operations) to make sure my app keeps working even if 1 node is down. My first scenario is not really a restore process: one node goes down because of, let's say the someone or something shutdown the VM that the node was running on. The 2 others nodes keep working and receiving write/read requests. 24 hours later, I manage to restart the VM of the first node, all services and files are still there, and I'm about to restart the node. Are there any actions that I should do before or after the restarting? Second scenario is pretty much the same, but I was not able to recover the VM of the first node and I need to reinstall everything on it, including Cassandra. How should I use my backup to resync this node? Should I even use it or is Cassandra capable to resync everything without me having to restore anything? What should I do precisely in this case? My last scenario is different. I've lost all my nodes and cannot recover anything. I've my global snapshot (3 snapshots, 1 for each node, taken at the same time). What is the process in this case? I've read the Cassandra documentation for the restore process, and I've a preference for the simple copy-restore (in other words, I rather not use sstableloader). I've troubles to understand when I should use refresh and/or repair commands in those scenarios.
Handle different restore scenarios with Cassandra 2.2
Not sure which is the "best", it must surely depend on your budget, development environment, resources, time, etc etc. Here some ideas, I list them in the order of preference, starting with the FREE. "Git" in effect creates a backup, even if strictly speaking its version control tool. You can create and save it on your iCloud Drive, Google Drive, Sky Drive, etc etc, of course you need to manually duplicate it on a regular basis. You can for sure get commercial tools to do this, do a google/wikipedia search, read some reviews. "Time machine" perhaps overkill, although maybe you can tweek it to focus on your projects. "rdist" is UNIX utility that you could setup in a "crontab" to do regular copies of files that changed, although a risky strategy. Just use Finder to manually duplicate the entire folder, you just need to remember to do it.
How is it recommended I back up my Xcode projects? Is there a way to backup the whole thing to the web to be pulled from in case the files are somehow deleted from my computer?
What is the best way to back up an Xcode project
You are asking how to take a tail-log backup when you don't have access to the MDF file. This works only if your database is not in the BULK_LOGGED recovery model, or if the log doesn't contain bulk-logged transactions. This has been covered in depth here: Disaster recovery 101: backing up the tail of the log.

Here are the steps in order:
1. Create a dummy database with the same names.
2. Take this dummy database offline and delete all of its files.
3. Copy the original database's LDF into place.
4. Bring the database online, which will fail.
5. Now you can take the tail-log backup using the command below:

BACKUP LOG dummydb
TO DISK = N'D:\SQLskills\DemoBackups\DBMaint_Log_Tail.bck'
WITH INIT, NO_TRUNCATE;
GO

Now that you have all the backups, you can restore to the point in time of the failure.
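Once the tail-log backup exists, the point-in-time restore for the scenario in the question is roughly this sketch (file names and the STOPAT time are placeholders, not taken from the original setup):

-- Restore the last full backup, leaving the database ready for further restores
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Full.bak' WITH NORECOVERY, REPLACE;

-- Restore the most recent differential (the 2am one in the example schedule)
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Diff_0200.bak' WITH NORECOVERY;

-- Restore the 2:30am log backup, then the tail-log backup, stopping just before the failure
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Log_0230.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\SQLskills\DemoBackups\DBMaint_Log_Tail.bck'
    WITH STOPAT = '2016-06-01 02:45:00', RECOVERY;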
Given the following (hypothetical) scenario, how would one best backup/restore the database. Daily doing full backups @ 12 am. Hourly doing differentials 1 am, 2am etc Transaction log backups on the half hours, 130am, 230am etc I am also storing the active .ldf file on drive X and the .mdf on drive Y. Also important the master db is on Y. Lets say hypothetically the Y drive fails at 245am. I have the full, diffs and transaction logs up until 230am. BUT I also have the .ldf. In theory I would have to probably reinstall SQL Server. Then I would want to recover that database up until 245am. I have heard of doing a tail-log backup on a restore operation BUT I don't have the .mdf anymore. So, I would need to create a new database from my full/diff/log backups. After that I'm not sure how to proceed to get that last 15 minutes of transactions. I hope this is making sense... Thanks! Steve.
Disaster Recovery - Restoring SQL Server database without MDF
When you delete an FTP account, you can choose to delete its home folder. If you delete an FTP account that has public_html as its home folder, that would mean deleting the whole site. This may not be exactly what happened in your case, but if not that, then something similar. public_ftp is typically used for anonymous users, so the fact that you are ending up there indicates something is wrong with the permissions on the FTP account you created.
I created an FTP account for a user to access the public_html folder and used FileZilla to log in and double-check that it took me to the right directory, which it did not. I went back to double-check the path and found that the public_html folder is gone and has been replaced with public_ftp. Obviously no site is live now, since the entire folder is gone. I know with 100% certainty that I did not delete anything, and aside from logging in with FileZilla no other action was taken on the FTP side of things. I called HostGator and they say they cannot find the folder anywhere. I wonder if HostGator support did something inadvertently that caused this. Has anyone gone through this, or does anyone know where I could look to find the folder? The whole thing does not make any sense to me: how could it be gone when I know nothing was deleted? I never gave anyone access to the site. It is a WordPress site, if that has any significance.
public_html folder replaced with public_ftp in cPanel
Yes. If you have local (or other) access to this GlusterFS storage, and assuming it is mounted under /mnt/backup/, you can list and restore from it using the file target. I'd suggest listing/grepping first, and once you have found the backup repo containing your folder, restoring from there. Repeat this for every backup folder (note the three slashes in the target URL for an absolute file path):

duplicity list-current-files [options] [--time time] file:///mnt/backup/server[num]/duply/www/

Then restore once you have found the folder (use the folder name as shown in the listing, i.e. a path relative to the backup base folder):

duplicity [restore] [options] --file-to-restore www.example.com [--time time] file:///mnt/backup/server[num]/duply/www/ /var/www/www.example.com

Parameters in [square brackets] are optional. Hope that helps. ede/duply.net

Sounds good. Unfortunately, the listing command is now copying all signature files from the GlusterFS storage to the local cache in order to build the listing. This takes a lot of time; I've already been waiting more than 15 minutes for a single server... Can I tell it to read them from the remote side instead? – Sebbo Nov 4, 2016 at 14:10
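If it is unclear which server's folder holds the deleted site, a small loop over all of them is one way to search; a sketch (the site name and paths are placeholders):

for repo in /mnt/backup/server*/duply/www; do
    echo "== $repo =="
    duplicity list-current-files "file://$repo" | grep -q "www.example.com" && echo ">> found in $repo"
done

# then restore from whichever repo matched, e.g.:
duplicity restore --file-to-restore www.example.com \
    "file:///mnt/backup/server003/duply/www" /var/www/www.example.com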
I have multiple virtual machines, and every machine backs up to the same backup storage (GlusterFS, mounted via FUSE), but into different backup directories. /var/www/, for example, is backed up to these directories:

/mnt/backup/server001/duply/www/
/mnt/backup/server002/duply/www/
/mnt/backup/server003/duply/www/
...
/mnt/backup/serverXXX/duply/www/
...

As backup software I'm using duply/duplicity. If I know on which server the folder or file was, I can connect to that server and run this command to restore it:

duply www-backup fetch www.example.com/ /var/www/www.example.com/ 1D

Unfortunately, this only works for files and directories that were backed up from/on that server. Today I have the problem that a while ago I deleted a website and can't remember which server it was hosted on. Well... this means I would need to connect to each single host and run the restore command to check whether it is available there or not. I don't have the time to do this on every single server. So: is there a way to fetch it from the backup using file:// anyhow, even when I don't know where it is? I've already tried to fetch it using the following command, but it didn't work, because that directory does not contain a backup chain:

duplicity --force file:///mnt/backup/ /var/www/www.example.com/

Any ideas?

# duplicity --version
duplicity 0.6.23
How-to restore duplicity/duply backup from unknown archive
You're still using PowerShell v2 or earlier. These early versions don't support member enumeration on arrays, which would allow you to access properties of array elements via the array object itself. Instead you get an empty result, because the array object does not have a property DirectoryName or FullName. If you can't upgrade to at least PowerShell v3, you can work around this issue with a loop:

$file | ForEach-Object { $_.FullName }

or by expanding the property:

$file | Select-Object -Expand FullName
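For the second part of the question (making a .bak copy of each match alongside the original), a sketch that should also work on PowerShell v2:

Get-ChildItem \\compname\c$\folder\ -Recurse -Filter *filename.txt* |
    ForEach-Object { Copy-Item -Path $_.FullName -Destination ($_.FullName + '.bak') }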
I encountered a problem in PowerShell when listing child items.

$file = Get-ChildItem \\compname\c$\folder\ -Recurse -Filter *filename.txt* | Select-Object -Property DirectoryName, FullName

When I try to access the properties of the result, they are empty:

$file.FullName

or

$file.DirectoryName

Also, if there are many files with the same file name in that directory tree, how can I back up those files in the same folder by adding .bak to the file extension?
Use child-item result object in array
Finally found the solution. Keep in mind that I use a German version of Windows, so the menu item names may not be accurately translated. In the Control Panel > System and Security > File History > extended settings there is a link to the event viewer showing backup-related events. It had some messages saying that it could not save files in a given folder. Deleting that folder, and also the BACKUP-DRIVE:\FileHistory folder on the backup drive, solved the issue.

The correct English version of the path would be: Control Panel\System and Security\File History\Advanced Settings. – Elias Feb 1, 2022 at 8:38
I have a Windows box which originally had Windows 7 installed and was later upgraded to Windows 10. On Windows 10 I added a 3 TB hard disk for backups. I set up the new hard disk as the backup target and started the first backup. That is now some days ago, but whenever I check the status I get the following message: "File History is saving copies of your files for the first time." On the backup disk nothing happens. A folder M:\FileHistory\username\boxname\Data is created, but there are no files in it. There is just a link (for whatever reason not a button) below it to stop the backup. When I restart the backup, I get the same message and again nothing happens. Does anybody have any idea what the reason could be or what to do? Thanks in advance...
File History first backup never succeeds. Nothing happens
Unneeded commands in your code

If you are using LAUNCH EXTERNAL PROCESS to do the backup, then you do not need the PgSQL CONNECT and PgSQL CLOSE. These plug-in commands do not execute in the same context as LAUNCH EXTERNAL PROCESS, so they are unneeded in this situation.

Make sure you have write access

If the 4D database is running as a service, or more specifically as a user that does not have write access to C:\Users\Admin_user\..., then it could be failing due to a permissions issue. Make sure that you are writing to a location that you have write access to, and also be sure to check the $out and $err parameters to see what the standard output and error streams are.

You need to specify a password for pg_dump

Another problem is that you are not specifying the password. You could either use the PGPASSWORD environment variable or use a pgpass.conf file in the user's profile directory. Regarding the PGPASSWORD environment variable, the documentation has the following warning:

Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file

One example uses pgpass.conf, the other sets the PGPASSWORD environment variable before the call to pg_dump and then clears the variable after the call (see the sketch after this answer).

Debugging

Make sure to use the debugger to check $out and $err to see what the underlying issue is.
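A sketch of what those two examples could look like; this is a reconstruction, not the answer's original code, the paths and credentials are the ones from the question, and SET ENVIRONMENT VARIABLE is assumed to be available in your 4D version:

// pgpass.conf approach: put one line in %APPDATA%\postgresql\pgpass.conf
// format host:port:database:user:password, e.g.
// localhost:5432:test_db:admin:admin

C_TEXT($cmd;$in;$out;$err)
$cmd:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"

// Option 1: rely on pgpass.conf and just run the command
LAUNCH EXTERNAL PROCESS($cmd;$in;$out;$err)

// Option 2: set PGPASSWORD for this call only, then clear it again
SET ENVIRONMENT VARIABLE("PGPASSWORD";"admin")
LAUNCH EXTERNAL PROCESS($cmd;$in;$out;$err)
SET ENVIRONMENT VARIABLE("PGPASSWORD";"")

// Inspect $out / $err when the dump fails
ALERT($err)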
I am using 4D for the front-end and PostgreSQL for the back-end, and I need to take database backups from the front-end. Here is what I have done so far to take a backup from 4D:

C_LONGINT(i_pg_connection)
i_pg_connection:=PgSQL Connect ("localhost";"admin";"admin";"test_db")
LAUNCH EXTERNAL PROCESS("C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db")
PgSQL Close (i_pg_connection)

But it's not taking the backup. The backup command itself is fine, because it works perfectly when run from the command prompt. What's wrong in my code? Thanks in advance.
Backup postgresql database from 4D
You are trying to restore a SQL Server 2000 database on SQL Server 2014. This is not supported. You will need to restore your database on an instance of SQL Server 2005, 2008 or 2008 R2 first, back it up from there, and then restore that new backup on SQL Server 2014. Microsoft explains here how to do this for SQL Server 2012.
I'm trying to restore an old database backup in SQL Server 2014, and I'm getting the error below. How can I get past this? I'm importing the backup in the following way:

Tasks -> Restore -> Database
Select the Device option -> pick the .bak file
At the options, select Overwrite
OK

Thanks in advance.
SQL Server 2014: Restore old backup
I would assume your only hope is to try to find your site on the Wayback Machine. The Wayback Machine is an amazing resource that has been archiving the broader internet since the late 90's. It's not a perfect copy, but for static sites like blogs, with enough traffic to have been noticed by the web spider that does the archiving, it can work rather well. It's also a cool nostalgic trip down memory lane :)
I had a good, active site about two years ago. I ran it actively, wrote more than 300 articles on it, and was getting 1k+ visits daily. Last year I was too busy with studies and other things; I totally forgot about the site and didn't even renew the domain and hosting. My hosting account got locked since I wasn't paying the bills. This year I am free and have started working online again, so I renewed my hosting and reactivated my domain. I asked 1and1 customer support to give me a backup of everything I had on the site, and they say "there are no longer available backups on the server for the deleted contract". I badly need all of the content I wrote on that site. The site was built with WordPress and I used the Cloudflare CDN. I don't have any backup of the site on my computer. I am wondering, is there any way to get a backup of my site from anywhere?
How can I get backup of my 1 year old site?
https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server

On the server side:

git init --bare

Meaning: you create the (bare) repository.

On the client side:

git remote add <name> <url>
git push origin master

Meaning: you add the remote server and push your local content to it.

Alternatively, you can copy the project that you have on the client (including the .git) onto the server. This is not a very good idea if the Git versions on the server and the client are not the same.
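For the actual recovery question (only the .git folders survived), a sketch of two ways to get working copies back; the paths are placeholders:

# Option 1: clone a new working copy directly from the surviving .git directory
git clone /mnt/recovered/project1/.git /srv/projects/project1

# Option 2: drop the .git folder into an empty directory and check the files out again
mkdir -p /srv/projects/project1
cp -a /mnt/recovered/project1/.git /srv/projects/project1/
cd /srv/projects/project1
git checkout -- .        # or: git reset --hard HEAD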
This question already has answers here: How to restore the project if I have only .git folder? Due to how a (now deceased) server was organized, we lost the OS and most Git configs, but we have 100% healthy and intact .git folders (according to git fsck --full performed after connecting the surviving drive to a different machine) for the projects (4 projects). Now we have a new server. How do we restore from the .git folders? None of us are Git professionals, so pardon if it is a stupid question.
restoring a repository from a .git folder [duplicate]
Why are you trying to reinvent the wheel?

$folder = __DIR__ . DIRECTORY_SEPARATOR . 'Backup_Files/';
if (!is_dir($folder)) {
    mkdir($folder, 0777, true);
}
$date = date('m-d-Y');
exec("mysqldump --user=... --password=... --host=... DB_NAME > {$folder}db_backup_{$date}.sql");
The following is code I found on the internet to take a database backup using PHP. The code works fine on localhost, but on the online server it does not work properly: it creates the .sql backup file, but the file is empty. What is the problem? I think it is a server permission problem. How can I make this code work on my online server? :(

<?php
$con = mysqli_connect("localhost","root","","dhn_online_db");
$tables = array();
$query = mysqli_query($con, 'SHOW TABLES');
while($row = mysqli_fetch_row($query)){
    $tables[] = $row[0];
}
$result = "";
foreach($tables as $table){
    $query = mysqli_query($con, 'SELECT * FROM '.$table);
    $num_fields = mysqli_num_fields($query);
    $result .= 'DROP TABLE IF EXISTS '.$table.';';
    $row2 = mysqli_fetch_row(mysqli_query($con, 'SHOW CREATE TABLE '.$table));
    $result .= "\n\n".$row2[1].";\n\n";
    for ($i = 0; $i < $num_fields; $i++) {
        while($row = mysqli_fetch_row($query)){
            $result .= 'INSERT INTO '.$table.' VALUES(';
            for($j=0; $j<$num_fields; $j++){
                $row[$j] = addslashes($row[$j]);
                $row[$j] = str_replace("\n","\\n",$row[$j]);
                if(isset($row[$j])){
                    $result .= '"'.$row[$j].'"' ;
                }else{
                    $result .= '""';
                }
                if($j<($num_fields-1)){
                    $result .= ',';
                }
            }
            $result .= ");\n";
        }
    }
    $result .="\n\n";
}
//Create Folder
$folder = 'Backup_Files/';
if (!is_dir($folder))
    mkdir($folder, 0777, true);
chmod($folder, 0777);
$date = date('m-d-Y');
$filename = $folder."db_backup_".$date;
echo $filename;
$handle = fopen($filename.'.sql','w+');
fwrite($handle,$result,0777);
fclose($handle);
?>
Get the database backup using PHP
In the current release you need to code it yourself, but the next Enterprise release of Wakanda (1.1.0) will contain a new administration console that you can use to run jobs periodically. It should be released in a couple of weeks. Note: subscribe to the newsletter and Twitter account to receive notifications of new releases.
Or is it necessary to do it in code, i.e. using a worker to run a backup with ds.backup(); every 24 hours? Edit: It seems like activating the database journal also triggers a backup. But is there a setting for the backup interval, etc.?
Is there a way to turn on automatic backups in Wakanda 11?
Installing Android SDK 23 fixed my issue.
Why am I getting this error?

No resource identifier found for attribute 'fullBackupContent' in package 'android

Here's a snippet of my manifest file:

<manifest xmlns:android="http://schemas.android.com/apk/res/android" ...
    <application
        android:allowBackup="true"
        android:icon="@drawable/time_machine_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme"
        android:name="com.friscosoftware.timelytext.TimelyTextApplicationBasic"
        android:fullBackupContent="@xml/backupscheme" >

The file backupscheme.xml resides in my res/xml folder. I'm trying to follow the instructions at Configuring Auto Backup for Apps.
No resource identifier found for attribute 'fullBackupContent' in package 'android
I found the solution: on a Linux server this works differently than on Windows; the command has to go through a shell:

// for a Linux server
String[] cmdarray = {"/bin/sh", "-c", command};
process = Runtime.getRuntime().exec(cmdarray);
I'm working on a Linux server with WildFly. When Java tries to execute the command:

String command = "mysqldump -h "+ hostDB +" -u "+ dbUsername +" "+ dbPassword +" "+ nameDB +" -r \""+ path + backUpFile+"\"";
Process process = Runtime.getRuntime().exec(command);

it returns:

mysqldump: Can't create/write to file '"/usr/share/wildfly/wildfly-9.0.0.Final/standalone/data/dbBackup/20160301_151254.sql"' (Errcode: 2)

But when I execute the line in the server's shell it works fine:

mysqldump -h xx.xx.xx.xx -u username"-pxxx" database -r "/usr/share/wildfly/wildfly-9.0.0.Final/standalone/data/dbBackup/20160301_151254.sql"

The MySQL host is a different machine. The same code works on my local Windows machine.
mysqldump (Errcode: 2) when trying to do a backup from Java on a Linux server with WildFly
Open a regular CMD window and type java -version. If this works, then from that same CMD window, cd to where Git is installed and type git-bash: that will open a bash session that should inherit the PATH from the CMD window. Confirm by typing java -version. Finally, cd to where your project is and type your java command.
I am trying to save my H2 database tables into a zip file by using the command:

java -cp C:/User/Adrian/.m2/repository/com/h2database/h2/1.4.189/h2-1.4.189.jar org.h2.tools.Script -url jdbc:h2:~/test -user sa -script test.zip -options compression zip

but I always get the error: java: command not found. I'm working with Git Bash on Windows and have added Java's bin folder to the PATH.
Backup using the Script Tool of h2 via shellscript in Git bash
When using tar to create an archive, the -T option can be used to take the file list from an input file.

input.txt:

/Downloads
/etc/var
/etc/log/test.txt

script:

#!/bin/sh
archive="example.tgz"
dest="Archive"
path="input.txt"
tar czf $dest/$archive -T $path

This is much better than using an array, right? And will -T add all files from the input into one archive? – LinuxWar Jan 4, 2016 at 15:15
@linuxwar Yes, this would create one archive. – Kadir Jan 4, 2016 at 15:17
I am building a script to back up files and directories on my Linux machine. These backups are stored in the directory Archive. Right now my source is a single hard-coded directory, but I have e.g. an input.txt with these directories:

/Downloads
/etc/var

or also with specific files:

/etc/log/test.txt

For /Downloads or /etc/var it must also back up the subdirectories of those directories. E.g. /Downloads has 3 other directories /dir1, /dir2, /dir3; these need to be archived as well. How can I use that input.txt file as the source? The whole of input.txt has to go into a single archive.

#!/bin/sh
source="/Downloads"
archive="example.tgz"
dest="Archive"
path="input.txt"
tar czf $dest/$archive $source
ls $dest

# this is how I read my file
while read line
do
    echo $line
done < $path
Shell script to backup using input list
There are several options that are better than using svnsync: When you need to have a live backup or DR server for your Subversion repositories, consider deploying a replica (slave server) using the Multisite Repository Replication (VDFS) feature. For the more detailed getting started guidance please consider the KB136: Getting started with Multisite Repository Replication and KB93: Performing disaster recovery for distributed VDFS repositories articles. When you want to backup your Subversion repositories, use the built-in Backup and Restore feature. The Backup and Restore feature helps you make daily backups of the repositories of any size and does not have any impact on performance and user operations. What is more, the Backup and Restore feature in VisualSVN Server is very easy to setup and maintain. Please, read the article KB106: Getting Started with Backup and Restore for setup instructions. Don't forget to create background verification jobs, too. See the article KB115: Getting started with repository verification jobs.
I want to use svnsync to create a backup copy of my SVN repository. The svnsync command replicates all versioned files and their attendant svn properties. It does not copy the hooks or conf directories from the source repository, so I will need to handle those manually. I wrote this batch script to create the backup repository:

setlocal enableextensions enabledelayedexpansion
:: create local backup project
svnadmin create C:\Repositories\Test
:: initialize local backup project with remote project
svnsync initialize https://local.com/svn/Test https://remote.com/svn/Test
:: get remote info project and store in info.txt
svn info https://remote.com/svn/Test>info.txt
:: get remote UUID project from info.txt
set UUID=
for /f "delims=" %%a in (info.txt) do (
    set line=%%a
    if "x!line:~0,17!"=="xRepository UUID: " (
        set UUID=!line:~17!
    )
)
:: set local UUID project with the remote UUID project
svnadmin setuuid C:\Repositories\Test %UUID%
:: sync local and remote project
svnsync sync https://local.com/svn/Test https://remote.com/svn/Test
endlocal

I wrote this batch script to synchronize the backup repository with the master repository (hourly schedule):

svnsync sync https://local.com/svn/Test https://remote.com/svn/Test

Say I set up a mirror SVN repo via svnsync, and my production SVN server crashes. How can I restore the production SVN repo from the mirror? Is that possible? Can someone suggest best practices for backing up the production server using svnsync?
use svnsync for backup
Try using find. Something like this:

find . -ctime -10

That will give you a list of files and directories, starting from within your current directory, whose state has changed within the last 10 days.

Example: my Downloads directory looks like this:

kobus@akira:~/Downloads$ ll
total 2025284
drwxr-xr-x  4 kobus kobus       4096 Nov  4 11:25 ./
drwxr-xr-x 41 kobus kobus       4096 Oct 30 09:26 ../
-rw-rw-r--  1 kobus kobus    8042383 Oct 28 14:08 apache-maven-3.3.3-bin.tar.gz
drwxrwxr-x  2 kobus kobus       4096 Oct 14 09:55 ELKImages/
-rw-rw-r--  1 kobus kobus 1469054976 Nov  4 11:25 Fedora-Live-Workstation-x86_64-23-10.iso
-rw-------  1 kobus kobus     351004 Sep 21 14:07 GrokConstructor-master.zip
drwxrwxr-x 11 kobus kobus       4096 Jul 11  2014 jboss-eap-6.3/
-rw-rw-r--  1 kobus kobus  183399393 Oct 19 16:26 jboss-eap-6.3.0-installer.jar
-rw-rw-r--  1 kobus kobus  158177216 Oct 19 16:26 jboss-eap-6.3.0.zip
-rw-rw-r--  1 kobus kobus   71680110 Oct 13 13:51 jre-8u60-linux-x64.tar.gz
-rw-r--r--  1 kobus kobus       4680 Oct 12 12:34 nginx-release-centos-7-0.el7.ngx.noarch.rpm
-rw-r--r--  1 kobus kobus    3479765 Oct 12 14:22 ngx_openresty-1.9.3.1.tar.gz
-rw-------  1 kobus kobus   16874455 Sep 15 16:49 Oracle_VM_VirtualBox_Extension_Pack-5.0.4-102546.vbox-extpack
-rw-r--r--  1 kobus kobus    7505310 Oct  6 10:29 sublime_text_3_build_3083_x64.tar.bz2
-rw-------  1 kobus kobus   41467245 Sep  7 10:37 tagspaces-1.12.0-linux64.tar.gz
-rw-rw-r--  1 kobus kobus   42658300 Nov  4 10:14 tagspaces-2.0.1-linux64.tar.gz
-rw-------  1 kobus kobus   70046668 Sep 15 16:49 VirtualBox-5.0-5.0.4_102546_el7-1.x86_64.rpm

Here's what the find returns:

kobus@akira:~/Downloads$ find . -ctime -10
.
./tagspaces-2.0.1-linux64.tar.gz
./apache-maven-3.3.3-bin.tar.gz
./Fedora-Live-Workstation-x86_64-23-10.iso
kobus@akira:~/Downloads$
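Since -ctime is measured relative to "now", a marker file updated after each backup run can make "new since the last backup" precise; a sketch (paths are placeholders):

# list files whose inode change time is newer than the marker left by the previous backup
find /srv/share -type f -cnewer /var/backups/.last_backup_marker

# after a successful backup, move the marker forward
touch /var/backups/.last_backup_marker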
I am trying to obtain a backup of 'newly' added files to a Fedora system. Files can be copied through a Windows Samba share and appear to retain the original created timestamp. However, because it retains this timestamp I am having issues identifying which files were newly added to the system. Currently, the only way I can think of doing this is to have a master list snapshot of all the files on the system at a specific time. Then when I perform the backup I compare the previous snapshot with a current snapshot. It would detect files that were removed from the system but it seems excessive and I was thinking there must be an easier way to backup newly added files. Terry
Linux: Finding Newly Added Files
It looks like your sudoers file is preventing you from running that command as sudo. Check your /etc/sudoers file and read the sudo documentation. Also "-l" isn't a valid command. sudo takes -l as an optional flag (which lists commands allowed by the user). But Fabric's sudo appears to be taking unknown strings and routing them through /bin/bash instead of using them directly as sudo command parameters.
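For reference, a sudoers entry that would let the Fabric user run commands through sudo might look like this (edit it with visudo; the user name is a placeholder):

# /etc/sudoers (or a file under /etc/sudoers.d/)
username ALL=(ALL) ALL
# or, to avoid the password prompt during non-interactive Fabric runs:
username ALL=(ALL) NOPASSWD: ALL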
I'm pretty new to Python and Fabric, and I am trying to write a simple script that runs a command via sudo on two hosts and captures the output, but I keep getting an error. Can anyone help me figure out what I might be missing?

My code:

from fabric.api import *
from getpass import getpass
from fabric.decorators import runs_once

env.hosts = ['host1','host2']
env.port = '22'
env.user = 'username'
env.password="password"

def sudo_dsmc(cmd):
    sudo("-l")

When I run fab sudo_dsmc:"-l":

MacBookPRO:PYTHON username$ fab sudo_dsmc:"-l"
[host1] Executing task 'sudo_dsmc'
[host1] sudo: -l
[host1] out: sudo password:
[host1] out: Sorry, user username is not allowed to execute '/bin/bash -l -c -l' as root on host1.
[host1] out:

Fatal error: sudo() received nonzero return code 1 while executing!

Requested: -l
Executed: sudo -S -p 'sudo password:' /bin/bash -l -c "-l"

Aborting.
Disconnecting from host1... done.

Although I can run apt-get update fine with my function below, without any errors:

def sudo_command(cmd):
    sudo("apt-get update")

# run like: fab sudo_command:"apt-get-update"
Python - Using Fabric with Sudo
1. If a row changes while the backup is going on, the new value may or may not be in the backup. This is generally OK because RethinkDB only offers single-row atomicity anyway, but if you have a workload where that isn't OK, then your other options are to use a filesystem that lets you snapshot the data on disk, or to add a new server to your cluster and set it as a replica of the table you want to back up.
2. It collects data from all shards.
3. It can take a very long time.
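For reference, the dump/restore pair that the backup article describes looks roughly like this (host, port and file name are placeholders):

# hot backup of the cluster (optionally restrict with -e db.table)
rethinkdb dump -c localhost:28015 -f rethinkdb_backup_2015-10-22.tar.gz

# restore it later
rethinkdb restore rethinkdb_backup_2015-10-22.tar.gz -c localhost:28015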
I read the article about backing up data, but some things are not clear to me: What happens to data that changes after the backup process has started? Does the backup operation work only on the current machine, or will it collect data from all shards in the cluster? If it is only the current machine, should I start the backup process on every server? Is it a slow operation, such that I should block all operations on the db while the backup is in progress?
RethinkDB backup data
1 Data stored "at rest" in Netezza are compressed, as they are also in backups. However, the compression algorithms & methods employed are different. That being said, one can typically expect that the size and compression ratio of the two to be roughly the same. Share Improve this answer Follow answered Oct 9, 2015 at 20:12 ScottMcGScottMcG 3,88722 gold badges1212 silver badges2121 bronze badges 1 Thanks Scott for the info. I found that in nz_backup script there is an option -sizedata, which calculate the size taken by backup image if we do actual backup.for any database. – Kapish Kumar Oct 16, 2015 at 6:28 Add a comment  | 
May I know what the roughly typical ratio would be between the full backup image size and the actual database size in Netezza? What I do know is that it depends on the actual data in the tables, internal compression, the number of tables in the db, etc.
What is the relation of backup image size and actual database size in Netezza