Currently you are telling it to copy everything up to the underscore in the source name, if one exists (and to append an underscore if one does not), so the text after the underscore is ignored. Try a for loop instead:

  @echo off
  mkdir "_z" >nul 2>&1
  for %%i in (*.doc) do copy "%%~i" "_z\%%~ni_%date:/=-% %time::=-%%%~xi"
I have this small script that is intended to back up all *.doc files in the folder into a _z subfolder. I have just noticed that it does not work properly when _ is part of the filename of the *.doc that is supposed to be backed up. Could anyone kindly explain the source of the issue and perhaps advise how to correct it, please?

  mkdir _z
  copy *.doc _z\*"_""%date:/=-%"" ""%time::=-%".doc
Batch-file: how to backup all .doc files to a folder
The master key is used to protect the certificate used for the backup encryption. The certificate is unique: executing CREATE CERTIFICATE with the same name on another server creates a different certificate. If this were not true, everyone with access to your backup would be able to decrypt it. So you need to export your certificate:

  -- Backup the certificate
  BACKUP CERTIFICATE BackupEncryptCert
  TO FILE = 'H:\MSSQL\Backup\Keys\BackupEncryptCert.cer'
  WITH PRIVATE KEY (
      FILE = 'H:\MSSQL\Backup\Keys\BackupEncryptCert.PrivateKey.pvk',
      ENCRYPTION BY PASSWORD = 'doodle$7')
  GO

and then create it on the new instance:

  -- Restoring a certificate from existing certificate backup
  CREATE CERTIFICATE BackupEncryptCert
  FROM FILE = 'H:\MSSQL\Backup\Keys\BackupEncryptCert.cer'
  WITH PRIVATE KEY (
      FILE = 'H:\MSSQL\Backup\BackupEncryptCert.PrivateKey.pvk',
      DECRYPTION BY PASSWORD = 'doodle$7')
  GO

Then you will be able to restore your encrypted backup.
I use this code:

  USE master;
  GO
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '1234';
  GO
  CREATE CERTIFICATE Cer WITH SUBJECT = 'Hello';
  GO

and then back up the database with this code:

  backup database Temp to disk = 'D:\Backup\temp.bak'
  WITH COMPRESSION,
    ENCRYPTION (
      ALGORITHM = AES_256,
      SERVER CERTIFICATE = Cer
    ),
    STATS = 10

Now I cannot restore it to another server. I created a master key on the other server, but it does not work.
Restore encrypted Backup in SQL
I had to search for it myself, but to exclude all ".jpg" files from the backup, use this line:

  *.jpg

Do that for every file type / extension, one per line.

Comment from the asker: please read the question properly; I want to exclude all files except .php and *.js; excluding ".jpg" is already in my list.
My "cpbackup-exclude.conf" has the lines below, which exclude images:

  /home/public_html/*/*.jpg
  /home/public_html/*/*.jpeg
  /home/public_html/*/*.webp
  /home/public_html/*/*.png
  /home/public_html/*/*.gif

I am using a shared server. Mostly I make changes in ".php" and ".js" files, so why back up everything? I want to back up only ".php" and ".js" files. As there are hundreds of file types (with different extensions), adding them all would make the list very long. Is there any short statement for this?
cpanel backup 'cpbackup-exclude.conf' exclude all files except .php and .js [closed]
I came across "AOMEI Backupper". It does the job, and actually did save me from losing 6 months of work (from home). However, I ended up just buying a bigger SSD and backing up to an external SSD (so I have a portable copy if needed). AOMEI works, but is a little "rough around the edges", if you know what I mean.
Is it possible? I've tried adding the external drive (i.e., the drive to be backed up) in Windows backup options, but I just get an error (0x80070032). If it matters, I'm using an HDD for the backup, and an SSD that needs to be backed up.
Backup an external drive to an external drive - windows 10 [closed]
I've encountered the same problem and I am currently researching a solution. I created a backup from cPanel at SiteGround and am attempting to move it over to GoDaddy via a cPanel restore.
I have been trying to restore my home directory backup for a while now, but I keep getting this error:

  Error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data

It is a .tar.gz file. I don't know what to do. Please help with an answer.
Error: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data cpanel backup restore
The easiest solution is to use the get command with the -neweronly switch:

  open sftp://user:[email protected]/
  get -neweronly /remote/path/* C:\local\path\*

For very similar results, you can also use the synchronize command:

  synchronize local /remote/path C:\local\path

If you want to synchronize only recent files, you can add -filemask=>2D or similar. See https://winscp.net/eng/docs/file_mask#size_time
I have call recordings on AWS, but I'm only given 10 GB of space, so I would like a script that will SFTP to AWS and copy only the newest files to my NAS sitting in the office. I can then use Windows Task Scheduler to run the script once a week. The idea is that after all the space on AWS is filled it starts deleting the oldest recordings, so I want to prevent that and only use AWS as a buffer for the current recordings, with my NAS as the main storage. My script will run every week/month to copy only the newest files in order to prevent losing any recordings. Below I have described how I would like the script to behave:

  Establish an SFTP session
  If the destination directory doesn't contain any files, create a full backup first
  If the destination does contain some files, copy only the missing files

Looking forward to any ideas, thanks!
WinSCP - SFTP script to backup the newest files
I think this plugin is buggy when it comes to include and exclude regexes. If you exclude .*xml it shouldn't back up any files ending in xml, but some files (not all) are still backed up.
I want to back up all .log* files under $JENKINS_HOME/logs via thinBackup's "Backup additional files" feature. The folder structure is as follows:

  /var/lib/jenkins/logs/
  |-- health-checker.log
  |-- slaves
  |   |-- Slave\ 1
  |   |   `-- slave.log
  |   |-- Slave\ 2
  |   |   `-- slave.log
  |   `-- Slave\ 3
  |       `-- slave.log
  `-- tasks
      |-- Connection\ Activity\ monitoring\ to\ agents.log
      |-- Connection\ Activity\ monitoring\ to\ agents.log.1
      |-- Download\ metadata.log
      |-- Download\ metadata.log.1
      |-- Fingerprint\ cleanup.log
      |-- Fingerprint\ cleanup.log.1
      |-- Periodic\ background\ build\ discarder.log
      |-- Periodic\ background\ build\ discarder.log.1
      |-- Workspace\ clean-up.log
      |-- Workspace\ clean-up.log.1
      |-- telemetry\ collection.log
      |-- telemetry\ collection.log.1

What I have now is the following regex:

  ^(logs|tasks|.*\.log.*)

which catches the top-level health-checker.log and also all logs below tasks. But how do I extend this regex to also include all logs from all slaves (Slave 1, Slave 2 and Slave 3)? I tried the following regex:

  ^(logs|tasks|.*\.log.*)^(logs|slaves|Slave\ 1|.*\.log.*)

which does not work. I also cannot find any further explanation of the regex format thinBackup uses. P.S. Our security department requires backup of all logs for the last 90 days.
Backup Jenkins log folder via ThinBackup
You can use the --where option for mysqldump to take a subset of rows. For example:

  mysqldump --where "updated_at > '2020-06-14 00:00:00'" ...other options...

However, this will add that WHERE condition to each table it dumps. So you must have an updated_at column in every table. MySQL does not require such a column to exist, so this is up to you and your table design conventions.

Another method is to use the binary log. You can dump the contents of the binary log as a series of SQL statements that reproduce the incremental changes.

  mysqlbinlog --start-datetime "2020-06-14 00:00:00" ...other options...

See https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html for full documentation on using this command.
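For illustration, a minimal sketch of complete invocations; the credentials, database name, binlog path, and cutoff time are placeholders:

  # dump only rows changed since the cutoff (requires an updated_at column in every table)
  mysqldump -u username -p --where "updated_at > '2020-06-14 00:00:00'" db_name > path/incremental.sql

  # or replay changes recorded in a binary log file since the cutoff
  mysqlbinlog --start-datetime "2020-06-14 00:00:00" /var/lib/mysql/binlog.000001 > path/incremental_from_binlog.sql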
I have a database of size 500 MB, and taking a full dump file takes a long time. I just want to know whether there is any way to take a backup of the data between a from and a to date-time, so that I can back up a particular time interval. I know we can take a whole DB backup using

  $ mysqldump -u username -p db_name > path/file.sql

Thanks in advance :) any suggestions accepted :)
MySql take backup of recently updated/inserted data
I managed to figure it out. You might need to edit the date formats.

  echo off
  set CUR_YYYY=%date:~0,4%
  set CUR_MM=%date:~5,2%
  set CUR_DD=%date:~8,2%
  set CUR_HH=%time:~0,2%
  if %CUR_HH% lss 10 (set CUR_HH=0%time:~1,1%)
  set CUR_NN=%time:~3,2%
  set CUR_SS=%time:~6,2%
  set CUR_MS=%time:~9,2%
  set SUBFILENAME=%CUR_YYYY%.%CUR_MM%.%CUR_DD%_%CUR_HH%.%CUR_NN%
  mkdir C:\Users\redfi\OneDrive\Minecraftbackup\%SUBFILENAME%
  robocopy "%appdata%\.minecraft\saves" "C:\Users\redfi\OneDrive\Minecraftbackup\%SUBFILENAME%" /e /xf
I have two separate pieces of code. This one creates a directory named with only the date, but I don't know how to put the time on the end:

  for /f "tokens=1* delims=" %%a in ('date /T') do set datestr=%%a
  mkdir c:\%date:/=%

And I have this to copy the files:

  robocopy "%appdata%\saves" "C:\Users\redfi\OneDrive\Savesbackup" /e /xf

They both work individually, but I want to put them in one batch file. I want it to create the directory with the current date and time, and then copy the saves into it, so I can restore older saves if I want. Thank you!
How to create a batch that creates a directory named the current date and time, and then copy files in it?
I saw that the Sanity.io project has a webhook that gets triggered when changes occur (under the 'Settings' tab --> 'API' sub-tab). I guess this could be set to call a service that gets all documents and saves their IDs with the current timestamp.
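As a rough alternative to a webhook-triggered service, a scheduled job could snapshot the IDs periodically. A minimal sketch, assuming the official Sanity CLI is installed and logged in and that the dataset is named production (both assumptions):

  # export a full, timestamped dataset snapshot
  sanity dataset export production backup-$(date +%F-%H%M).tar.gz

  # or keep only the document IDs via a GROQ query
  sanity documents query '*[]._id' > ids-$(date +%F-%H%M).json

Comparing the ID lists from two runs would then show roughly when a document disappeared, which is the missing piece needed to fetch its revisions from the history API.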
Question #1: is it possible to restore deleted items from backup in Sanity.io? To my understanding, restoring a backup is done by exporting all documents from a dataset's history, and importing it. Restore - there's one way to do it: https://www.sanity.io/docs/importing-data. Export - there're two ways to export data: Export all currently-existing-data: https://www.sanity.io/docs/export. Export one historical document by its ID: https://www.sanity.io/docs/history-api. IDs of deleted items do not appear in currently-existing-data (because they are deleted, duh), and without them, I can't get historical documents. Also, there's a Gotcha section saying: Gotcha Current Access Control means if you're able to access the document today, you'll be able to access all the previous revisions of the document. Question #2: if restoring deleted items from backup is NOT possible due to those missing document IDs - is there a way to automatically save all document IDs (either every hour or whenever a change occurs)? I guess that if there's a mechanism that also saves the last time an ID was seen, you can also know more or less its deletion time...
Sanity.io backup and restore (or auto-saving doc IDs)
I'm not exactly sure why, but the cause of the issues was the latest version of Sysinternals' Sysmon, version 11.0. This also caused another issue where Office documents would take a long time to open or save from and to network locations, even when the client had a gigabit connection to the server. Rolling back to Sysmon version 10.42 resolved all of the issues.
We are running Microsoft Azure Backup Server (MABS) v3 on Windows Server 2019. Since Monday evening the cloud and bare-metal backups have been failing, but other on-premises backups are running fine. Looking through Event Viewer we found an application error which might assist in resolving the issue, but I haven't been able to find much info online:

  Faulting application name: cbengine.exe, version: 2.0.9177.0, time stamp: 0x5e677965
  Faulting module name: LKRhDPM.DLL, version: 2.0.9177.0, time stamp: 0x5e6778b2
  Exception code: 0xc0000005
  Fault offset: 0x0000000000005e3f
  Faulting process id: 0x1984
  Faulting application start time: 0x01d62470ae5f7af9
  Faulting application path: C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent\Microsoft Azure Recovery Services Agent\bin\cbengine.exe
  Faulting module path: C:\Program Files\Microsoft Azure Backup Server\DPM\MARS\Microsoft Azure Recovery Services Agent\Microsoft Azure Recovery Services Agent\bin\LKRhDPM.DLL
  Report Id: b57ddf2b-4948-4f68-b572-e58f9031db9b
  Faulting package full name:
  Faulting package-relative application ID:

The application is on the most recent version and I also updated and repaired all the Visual C++ components as well as the .NET Frameworks. The OS also has the March updates installed. Please let me know if you need more details.
Azure Backups Failing Due To Possible MARS Agent DLL Issue
Yes, it's possible. The easiest way is to use Export in the portal to back up the database to blob storage outside of the resource group the database is located in: open the database, select Export database, and choose a blob storage account outside of the resource group the database is located in. For more details, please see: Export an Azure SQL database to a BACPAC file. Hope this helps.
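If the export needs to be automated, the same operation can be scripted with the Azure CLI. A minimal sketch, where the resource group, server, database, credentials, and storage details are all placeholders; the target container can live in a storage account from any resource group:

  az sql db export \
    --resource-group MyResourceGroup \
    --server myserver \
    --name TestDB \
    --admin-user sqladmin \
    --admin-password '<sql-admin-password>' \
    --storage-key-type StorageAccessKey \
    --storage-key '<storage-account-key>' \
    --storage-uri 'https://otherstorage.blob.core.windows.net/backups/TestDB.bacpac'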
I'm trying to figure out if there is a way to backup Azure SQL Databases (Not SQL on Azure VMs) to a service vault or a blob storage outside of the resource group the databases is located in. So far I have not found any resources on the topic. Can anyone confirm that this is not possible?
Backup of Azure SQL Database to Recovery Services vault
All snapshots to a single repository are incremental. You will not have to do anything specifically to take differential snapshots. If you just take snapshots to the same repository, each snapshot will try to reuse as much data as possible from prior snapshots automatically. For more details: https://discuss.elastic.co/t/feasible-solution-of-snapshot-for-30-gb-and-increasing-every-day-data/217568/3
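For illustration, a minimal sketch of the snapshot calls against a 6.x cluster, assuming a shared-filesystem repository whose path is listed in path.repo; the repository and snapshot names are placeholders:

  # register the repository once
  curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d '
  {
    "type": "fs",
    "settings": { "location": "/mount/backups/my_backup" }
  }'

  # every snapshot taken afterwards into the same repository is incremental:
  # only new or changed segments are copied
  curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot-2020-06-01?wait_for_completion=true"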
I want to know how I can take differential snapshots in Elasticsearch and how they work. We receive around 30 GB of data monthly across all indices in Elasticsearch. A few indices get updated daily, and some indices' data gets purged after a certain number of retention days. So I was thinking of going with incremental snapshots, so that it will not take long and only modified data will go into a snapshot. But I don't know how it works and whether it will be feasible for my case. Could you please help me design a snapshot process that can keep working permanently and will not degrade over time? Elasticsearch version: 6.2.4
How to perform differential or incremental snapshot and how it works?
A log backup that leaves the target database in RESTORING is called a "tail log backup". And taking the database offline requires terminating all other connections to the database. Like this:

  use testdb
  go
  alter database testdb set single_user with rollback immediate
  go
  use master
  BACKUP LOG [TESTDB] TO DISK = N'c:\temp\TESTDB.bak'
  WITH NO_TRUNCATE, NOFORMAT, NOINIT, NAME = N'TESTDB-Full Database Backup',
       SKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 10, CHECKSUM
  GO

Comment from the asker: Thanks, I followed the MS guide, which doesn't mention that. I did try that, but without the "rollback immediate".
I am attempting a log shipping failover test, and the step which is intended to put the database into restoring mode is failing with the error "Exclusive access could not be obtained because the database is in use". The action was carried out through SSMS by selecting "Transaction Log" and also selecting "Backup the tail of the log" under Media Options, which should leave the database in restoring mode. After the failure I attempted to put the database into single user mode first and also to take it offline, but neither command worked (or failed). I have repeated the action against a test database and that worked with no problem. The T-SQL is below:

  BACKUP LOG [TESTDB] TO DISK = N'U:\MSSQL\Backup\TESTDB.bak'
  WITH NO_TRUNCATE, NOFORMAT, NOINIT, NAME = N'TESTDB-Full Database Backup',
       SKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 10, CHECKSUM
  GO
  declare @backupSetId as int
  select @backupSetId = position from msdb..backupset
  where database_name=N'TESTDB'
    and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'TESTDB')
  if @backupSetId is null
  begin
    raiserror(N'Verify failed. Backup information for database ''TESTDB'' not found.', 16, 1)
  end
  RESTORE VERIFYONLY FROM DISK = N'U:\MSSQL\Backup\TESTDB.bak'
  WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
  GO

I also checked for any blocking or running transactions but nothing showed up. Any ideas anyone?
Transaction Log Backup fails with "Exclusive access could not be obtained because the database is in use"
Check out the link below: https://cloud.google.com/sql/docs/postgres/import-export/exporting It describes how you can use the gcloud command or the REST API to export your database in SQL dump or CSV format and save it to Cloud Storage. Internally, it uses pg_dump, which is the same thing the django-dbbackup extension also uses.
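A minimal sketch of the gcloud route, where the instance name, bucket, and database name are placeholders and the Cloud SQL service account is assumed to have write access to the bucket:

  # export the database as a SQL dump straight into a Cloud Storage bucket
  gcloud sql export sql my-instance \
    gs://my-backup-bucket/db-development-$(date +%F).sql \
    --database=db-development

This sidesteps the pg_dump/.pgpass problem entirely, since the export runs inside Cloud SQL rather than in the App Engine runtime; the cron view could trigger this command instead of calling dbbackup.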
I have a Django app running in Google App Engine standard, using Postgres. I am using Google Cloud SQL (Postgres) with Google App Engine. How do I back up the database every day at 2am and save the .sql file in a Google Storage bucket? I want to run a daily cron to save a DB snapshot in one of the buckets. Using django-extensions and django-dbbackup, I have created a cron job, which works locally. But on GAE I get this error:

  dbbackup.db.exceptions.CommandConnectorError: Error running: pg_dump db-development --host=67.74.73.21 --port=5432 --username=db-development-user --no-password --clean

How do I set the psql password or use a .pgpass file in GAE? Code:

  # cron.yaml
  cron:
  - description: "DB Backup CRON job"
    url: /core/cron-jobs  # the path to your view
    schedule: every 2 minutes  # the frequency for running the job
    retry_parameters:
      min_backoff_seconds: 120
      max_doublings: 5

  # views.py
  def my_background_job(request):
      call_command('runjobs', 'daily')
      return HttpResponse('Cron run success', status="200")

  # myapp>jobs>daily>db_backup.py (example documented in django_extensions)
  from django_extensions.management.jobs import DailyJob

  class Job(DailyJob):
      help = "Django Daily DB Backups"

      def execute(self):
          from django.core import management
          management.call_command("dbbackup")

  # settings.py
  INSTALLED_APPS = [
      ...
      'django_extensions',
      'dbbackup',  # django-dbbackup
  ]

The Postgres backup documentation at https://cloud.google.com/sql/docs/postgres/backup-recovery/backing-up is vague. What Google terms "automated backup" isn't much help for me, since I don't get the DB dump in a *.psql file anywhere. They do mention "Currently, you can only use the API to set custom locations for backups", but that is only about the geographic location. How do I do automated daily backups of my Postgres DB (as *.psql dump files in a Google Storage bucket) for my Django app running on Google App Engine?
How to do automated postgres database backups in google app engine for django app?
This error is printed when the actual binary file is missing from the Artifactory storage location (by default, $ARTIFACTORY_HOME/data/filestore/). You shouldn't be able to download the same artifact (repo1.maven.org-cache/org/apache/httpcomponents/project/7/project-7.pom) either, as the actual content is missing. However, this shouldn't fail the backup of the repository/system; it is just an error message indicating that the content of this artifact does not exist.
When trying to backup or export the JFrog Artifactory, the backup folder is created. But the System log shows multiple errors like: 2019-12-23 17:31:07,026 [art-exec-5] [ERROR] (o.a.r.d.i.DbExportBase:123) - Failed to export '/data/backups/20191223.172751.tmp/repositories/repo1.maven.org-cache/org/apache/httpcomponents/project/7/project-7.pom' due to:Binary provider has no content for 'c486760d8e0eafe8d4932450e386c2805364f782': Binary provider has no content for 'c486760d8e0eafe8d4932450e386c2805364f782'
Artifactory Failed to export , Binary provider has no content
Here is a non-exhaustive list of problems that could compromise the restoration of data:

  Human error (probably the most common mistake)
  Wrong backup configuration
  Wrong business data target
  Backup software failure
    - breaking change between versions
    - internal error at restoration
    - silent backup failure
  Hardware dependency
    - minimum space requirement
    - minimum compute power (RAM speed and size, CPU frequency, CPU cores, GPU, ...)
    - specific hardware requirement (Nvidia GPU, Intel CPU, CPU architecture, sound card)
    - input/output interface requirement (MIDI, USB, DVD/CD, optical, 3.5mm jack, gigabit Ethernet, ...)
  Network and media dependency
    - snapshot media availability
    - the medium is not accessible
    - the medium is not readable (media failure)
    - authentication & authorization failure
    - recovery endpoint is not available
  Operating system dependency
    - OS type (Linux, Darwin, Windows, ...)
    - OS version (Ubuntu 16.04 LTS, Debian Buster, Windows Home, ...)
    - minimum security patch, kernel version
    - platform architecture (32/64-bit)
    - specific driver requirement (Nvidia driver, SATA controller, ...)
    - specific system application (Hyper-V, Nvidia CUDA, etc.)
    - timezone, region
    - user & group permissions, file & folder permissions
    - firewall rules (port, protocol, ...)
  Application dependency
    - dependency on other applications & data
    - application version incompatibility with other applications
    - application license status (expiration, validity, number available, ...)
    - application availability (can I download it now?)
There is a lot of research about why backups fail, but I have trouble finding good resources about why backup recovery fails. Have you ever experienced a failed recovery? What are subtle issues you didn't anticipate (environment changes, storage unavailability, permissions, media malfunctions, ...)?
What are some subtle problems that can cause the restoration of a backup to fail?
Save the following text as backup_arch.sh:

  #!/bin/bash
  yearXXXX=${1:-2019}
  monthXX=${2:-09}
  dayXX=${3:-25}
  hourXX=${4:-07}
  destination=backup/$yearXXXX/$monthXX/$dayXX/$hourXX/
  mkdir -p $destination
  scp /StoneSoft/StoneGate/data/storage/Firewall/$yearXXXX/$monthXX/$dayXX/$hourXX/*.arch $destination

Then make it executable:

  chmod a+x backup_arch.sh

Then run it:

  ./backup_arch.sh 2019 09 25 08

The arguments are year, month, day and hour. Change destination= to the proper location.
I'm going to try to explain my requirement for this backup. I suppose it's easy, but not for me, because I don't usually work with Linux. Scenario: from a Linux server, via SCP, I want to get the daily logs from my firewall. The firewall stores its logs in /StoneSoft/StoneGate/data/storage/Firewall/year2019/month09/day25/hour07/file_with_date.arch. I run scp and I can copy without problems. What I need is a shell script to copy the folder daily, parameterising the year, month, day and hour variables: yearXXXX, monthXX, dayXX, hourXX. Is this possible? Regards
scp backup of a path with variable folder names
According to your screenshot, there is a warning mark next to the Reporting Key step. If your deployment uses reporting, you will be prompted for a password in order to back up the encryption key for reporting. The backups will be encrypted using that key, and you are required to provide a password that is at least 7 characters long and contains one uppercase letter, one lowercase letter and one number. This may be the root cause of why the Verify button is disabled and you are not able to move on to the next step. You must make sure the Confirm Password and Encryption Key Password fields match, and that the password is strong enough to meet the requirements. In other words, you must make sure the warning mark disappears in the Reporting Key step. Hope this helps. See the official tutorial, Configure a backup schedule and plan, for reference.
We have a TFS 2013 setup. A scheduled backup was configured on this server earlier, but somehow the backup location no longer exists. I would like to change the backup location. I followed the steps below:

  a) Started the TFS Admin Console.
  b) Clicked on Reconfigure Scheduled Backup.
  c) Modified the path and followed the steps of the wizard.
  d) I am stuck on the screen below; the Verify button is disabled and I am not able to move on to the next step.

Please suggest what I am missing here.
Not able to configure TFS backup on TFS2013 environment
It seems you can, if your purpose is to restore a MySQL installation quickly, but I don't know exactly how feasible it is in terms of space compared to a simple automatic backup procedure. (You have a lot of alternatives; for example, you can schedule a backup with crontab.)
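One thing worth noting is that copying /var/lib/mysql while the server is running can yield an inconsistent copy (especially with InnoDB), so the copy is only reliably usable if MySQL is stopped first. For comparison, a minimal sketch of a scheduled logical backup; the user, password handling, schedule, and paths are placeholders:

  # /etc/cron.d/mysql-backup - dump all databases every night at 02:00
  0 2 * * * root mysqldump --all-databases --single-transaction -u backup_user -p'secret' | gzip > /var/backups/mysql-$(date +\%F).sql.gz

The dump restores with a plain mysql client on the same or a newer server version, which a raw copy of the data directory does not guarantee.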
As mentioned in the subject, my question is whether I can simply copy the mysql database folder in /var/lib/mysql/ to backup the database? Would this cause issues anyhow? Obviously the idea is to add the backup folder back into the server in case of data loss. Thanks in advance. Regards.
Can I simply copy the mysql database folder in /var/lib/mysql/ to backup the database?
If you started your container using volumes, try looking in C:\ProgramData\docker\volumes for your backup. The backup is normally located at /var/opt/gitlab/backups within the container, so hopefully you mapped /var/opt/gitlab to either a volume or a bind mount.

Follow-up comments:
- Asker: I tried to locate this path but I am unable to find it on my Windows machine.
- Answerer: Can you give the docker run command or the docker-compose.yml you used to start the container?
- Asker: I have now added the compose file and the docker run command to the question.
- Answerer: First, try docker exec -it <container_name> bash and ls /var/opt/gitlab/backups. Is there a tar file?
- Asker: I have tried this but got "ls: cannot access '/var/opt/gitlab/backups': No such file or directory".
I have Docker Desktop installed on my Windows PC. In it, I have self-hosted GitLab in one Docker container. Today I tried to back up my GitLab by typing the following command:

  docker exec -t <my-container-name> gitlab-backup create

After running this command the backup was successful and I saw a message that the backup was done. I then restarted Docker Desktop, and once the container had started I accessed the GitLab interface, but I saw a fresh GitLab instance. I then typed the following command to restore my backup:

  docker exec -it <my-container-name> gitlab-backup restore

But I saw this message:

  No backups found in /var/opt/gitlab/backups
  Please make sure that file name ends with _gitlab_backup.tar

What can be the reason? Am I doing it the wrong way? I saw these commands on GitLab's official website. I have this in the docker-compose.yml file:

  version: "3.6"
  services:
    web:
      image: 'gitlab/gitlab-ce'
      container_name: 'gitlab'
      restart: always
      hostname: 'localhost'
      environment:
        GITLAB_OMNIBUS_CONFIG: |
          external_url 'http://localhost:9090'
          gitlab_rails['gitlab_shell_ssh_port'] = 2224
      networks:
        - gitlab-network
      ports:
        - '80:80'
        - '443:443'
        - '9090:9090'
        - '2224:22'
      volumes:
        - '/srv/gitlab/config:/etc/gitlab'
        - '/srv/gitlab/logs:/var/log/gitlab'
        - '/srv/gitlab/data:/var/opt/gitlab'
  networks:
    gitlab-network:
      name: gitlab-network

I used this command to run the container:

  docker-compose up --build --abort-on-container-exit
Gitlab-CI backup lost by restarting Docker desktop
Yes, Velero can be used to get only the required objects from Kubernetes. I strongly suggest checking this video.
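If all that is needed is the object definitions (no volume data), a plain kubectl dump can also run from inside a pod with a suitably privileged service account. A minimal sketch; the resource list and output path are assumptions, so extend them to whatever kinds you need:

  # dump the YAML of common configuration objects from every namespace
  kubectl get deployments,statefulsets,daemonsets,services,ingresses,configmaps,secrets \
    --all-namespaces -o yaml > cluster-config-$(date +%F).yaml

Velero can do something similar without snapshotting volumes, since volume snapshots are optional per backup, so it is not limited to disaster-recovery-style full backups.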
I am implementing a snapshot solution for my K8s cluster. I already have a way of getting a consistent snapshot of all the services (persistent volumes) that are running, so I don't need to snapshot persistent volumes for that matter. But I am looking for a way to get all the K8s config at the time of taking the snapshot. What would be the best way to get all K8s config details, including all YAMLs of services, configmaps and secrets? I read about Velero, but Velero is more of a disaster recovery solution; I would like to take a snapshot while the cluster is still running. Can Velero be used to just get the above-mentioned config from K8s? The interesting part is that this snapshot solution is itself going to be a service running on Kubernetes, which means I am looking for a solution that works from within a pod. Any help is greatly appreciated.
How to get all Kubernetes Objects details
As Larnu said, you can use differential backups (SQL Server). You have already taken a full backup of the database to your Azure Blob Storage with these statements:

  BACKUP DATABASE [TestDB]
  TO URL = 'https://cloudspacestorage.blob.core.windows.net/backups/Testdb.bak'
  WITH CREDENTIAL = 'Backupcredential', STATS = 10
  GO

SQL Server Backup to URL supports creating differential backups, so you can take a differential backup to Azure Blob like this:

  BACKUP DATABASE [TestDB]
  TO URL = 'https://cloudspacestorage.blob.core.windows.net/backups/Testdb_diff.bak'
  WITH CREDENTIAL = 'Backupcredential', DIFFERENTIAL

Hope this helps.
I have a very large DB (250 GB) and I have backed it up to an Azure blob using the statement below:

  BACKUP DATABASE [TestDB]
  TO URL = 'https://cloudspacestorage.blob.core.windows.net/backups/Testdb.bak'
  WITH CREDENTIAL = 'Backupcredential', STATS = 10
  GO

I now need to do it again. Is there a way I can do a differential backup, i.e. only the changes since the last backup? Thanks
Sql backup to url - azure blob differential
Use the date, hour and minute in the name of your archive so it won't replace your old backup:

  day=$(date +"%F_%H-%M")

Including all the fields (day, date, time and year) will let you store all backups without overwriting any.
I am new to Linux. I am using CentOS 7. I found out that my new backup always replaces my old backups. For example, the backup from 15th of July 2019 will replace the backup from 14th of July 2019.

  # Create archive filename.
  #day=$(date +%A)
  day=$(date -d "$D" '+%d')
  hostname=$(hostname -s)
  archive_file="$hostname-$day.tgz"

Could you point out what I am doing wrong with this command? Or could there possibly be another reason that my backups replace the old ones that I did not see? Any help would be appreciated.
New backups overwriting old backups in Linux Crontab
Use the full path:

  create or replace DIRECTORY paris_dir as '/u04/backup';

Only a full path is valid. Oracle does not verify the directory when you create it, so a wrong path only shows up as an error when the directory is first used (for example by expdp).
I had been asked to create a tablespace with datafile '/u03/oracle/table/prac_tab.dbf' and export a dump of it to '/u04/backup'. My problem begins when I run the following command:

  [oracle@haranda ~]$ expdp paris dumpfile=parisbk1.dmp logfile=parisbk1.log full=y directory=paris_dir

I have tried running it in /u03/oracle and other places, but I always get the same result:

  ORA-39002: invalid operation
  ORA-39070: Unable to open the log file.
  ORA-29283: invalid file operation
  ORA-06512: at "SYS.UTL_FILE", line 536
  ORA-29283: invalid file operation

My other problem is that I managed to make it work in my first test, but I am not able to do so now, meaning I could only have done it by mistake. Also, I think I understand the basics, but there is a lot I don't understand so far; if you are able to explain it I would appreciate it. I will leave the code I had been using to create the user, grant privileges and create the directory:

  create user paris identified by paris;
  create tablespace practica_tab datafile '/u03/oracle/table/practica_tab1.dbf' size 150m autoextend on next 10m maxsize unlimited;
  create tablespace practica_idx datafile '/u04/oracle/index/practica_idx1.dbf' size 150m autoextend on next 10m maxsize unlimited;
  alter user paris quota unlimited on practica_tab;
  alter user paris quota unlimited on practica_idx;
  alter user paris account unlock;
  grant resource to paris;
  grant connect to paris;
  grant imp_full_database to paris;
  grant exp_full_database to paris;
  GRANT UNLIMITED TABLESPACE TO paris;
  alter user paris default tablespace practica_tab;
  grant datapump_exp_full_database to paris;
  grant datapump_imp_full_database to paris;
  create DIRECTORY paris_dir as 'u04/backup';
  grant read, write on directory paris_dir to paris;
can't export using expdp, can't open log and more
See: https://docs.unity3d.com/Manual/UnityCollaborateRollback.html And: https://www.youtube.com/watch?v=hBDflmDFAxI These should give you all the info you need. I highly advise you to back up an export on a USB drive.
I had been working on a scene for hours when my computer mysteriously crashed completely and needed to restart. I remember having saved several times while working, so I wasn't scared. As I thought I had lost only maybe 1-5 minutes of work, I re-opened Unity, and to my surprise my scene seemed to have reverted to one of my first saves. Because I had re-opened the scene, the Temp/_BackupScenes folder was erased... So I was wondering, is there any way to recover Unity scenes that does not rely on the _BackupScenes folder? I don't think there is, but I want to be sure, because I clearly remember saving the scene before it crashed... Thank you in advance.
Unity - Recover lost scene
If you've mounted a volume from your host into the Nexus container, you should be able to point your task configuration to the mounted volume from within the container. See this link for more details: https://help.sonatype.com/repomanager3/backup-and-restore/configure-and-run-the-backup-task#app
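A minimal sketch of that setup, assuming the standard sonatype/nexus3 image; the host path is a placeholder:

  # run Nexus with its data directory bind-mounted to the host
  docker run -d --name nexus -p 8081:8081 \
    -v /srv/nexus-data:/nexus-data \
    sonatype/nexus3

The weekly backup task inside Nexus can then be pointed at a folder such as /nexus-data/backup, which appears on the host at /srv/nexus-data/backup and can be shipped to another server from there (for example with rsync or a network mount).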
I'm setting up Nexus as an artifact manager for Maven projects. I am using Nexus 3 in a Docker container and I am trying to set up a weekly backup as a task in the administration area of Nexus. Unfortunately I can't find anything in the Nexus documentation about how to set the filesystem location for the backup data, for example on the host filesystem or on a different server. Can somebody help me, please? Any solution or advice? Thanks in advance.
Nexus in a Docker container: how to configure another server as the backup location in the tasks?
To exclude everything except the .zshrc file, put this in your .gitignore file:

  /*
  !.zshrc

Follow-up comments:
- Asker: Unfortunately, this does not work. When I execute cd ~/subdirectory/ && git status I get "nothing added to commit but untracked files present". The expected output was "fatal: Not a git repository (or any parent up to mount point /)".
- Answerer: True. You will not version any files/folders except the .zshrc one (which is, under the hood, the expected behaviour), but unfortunately there is no way to prevent the behaviour you describe (as far as I know), since the git command searches for the nearest .git folder along the path and only displays "fatal: Not a git repository" if that fails.
I would like to create a backup of the .zshrc file in my ~/ directory and avoid versioning all other items. How to exclude all subdirectories and their content? For example: cd ~/subdirectory/ && git status should output: fatal: Not a git repository (or any parent up to mount point /) How to achieve this?
How to add ~/.zshrc file into git?
  // start the backup process
  Artisan::call('backup:run');
  $output = Artisan::output();
  // log the results
  Log::info("Backpack new backup started from admin interface \r\n" . $output);
How do I get a DB backup via controller code using an artisan command? I am using https://github.com/schickling/laravel-backup to back up and restore the database. The package works fine in the terminal with the command php artisan db:backup --database=mysql, but when I try to execute it via controller code it doesn't work. This is my code snippet:

  try {
      $result = Artisan::call('db:backup', ['--database' => 'mysql']); // this is the command
      if ($result) {
          return Redirect::back()->with('success', 'Database backup was successful, .SQL file was saved in dump folder.');
      } else {
          return Redirect::back()->with('error', 'Error to back up database.');
      }
  } catch (\Exception $e) {
      return Redirect::back()->with('error', $e->getMessage());
  }

I also tried this, but it doesn't work either:

  Artisan::call('db:backup', ['--database' => 'mysql']);
  Artisan::call('db:backup');

Please, can anyone help me find out exactly where the problem is?
How to get database backup using artisan command via controller code?
You can't back up PostgreSQL with a query, as stated in the comment by @jmcilhinney, and the link referenced for your question makes that clear. There is no such statement in the PostgreSQL documentation either: to back up, you use pg_dump. Using pg_dump will give you a full SQL file, including all the DDL and DML statements needed to recreate your database in another place (or restore it). Big systems such as the Odoo and iDempiere ERPs also back up their databases by running the pg_dump command (see "How To Backup Odoo" and "How To Backup iDempiere"). Just accept that you can't do this from a SQL statement and discuss it with your project team.
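A minimal sketch of the command the application would have to launch as an external process (for example via Process.Start in VB.NET); the host, user, and output path are placeholders, and the password would normally be supplied through the PGPASSWORD environment variable or a pgpass.conf file rather than interactively:

  pg_dump -h localhost -p 5432 -U backup_user -F c -f "C:\Backups\aplikasilis2_2019-11-01.backup" aplikasilis2

The -F c option writes a compressed custom-format archive that can later be restored with pg_restore; drop it to get a plain .sql script instead.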
I have a problem. I want to ask you all how to back up a database from VB.NET. I'm using PostgreSQL. Why do I need to back up the database? Because it's too large to display everything in the DataGridView when it's running. So after a time span, for example 3 months, it should automatically be backed up. I always get a message box saying "Error at or near Backup. Error while executing non query". Thank you.

  koneksi()
  Try
      Dim fname As String
      Dim db2 As String = "aplikasilis2"
      Dim strQuery As String
      Dim objdlg As New SaveFileDialog
      objdlg.FileName = "C:\Documents\Arsip 3 bulan_" + FormatTglUniversal(Date.Now) + ".bak"
      objdlg.ShowDialog()
      fname = objdlg.FileName
      Dim data_affector As Integer
      strQuery = "Backup database =" & db2 & " To disk ='" & fname & "'"
      Try
          cmdBackup = New OdbcCommand(strQuery, conn)
          data_affector = cmdBackup.ExecuteNonQuery
          MsgBox("berhasil")
      Catch ex As Exception
          MsgBox(ex.Message)
      End Try
  Catch ex As Exception
      MsgBox(ex.Message)
  End Try
How to back up PostgreSQL using vb.net
Try using SSMS 17. You can get it from this URL: Download SQL Server Management Studio 17.9.1

(The asker later confirmed that using SQL Server Management Studio 17.9 solved the problem.)
I can't edit the "Back Up Database Task", please help me. Sorry, I don't speak English very well.

  The task with the name "Back Up Database Task" and the creation name "Microsoft.SqlServer.Management.DatabaseMaintenance.DbMaintenanceBackupTask, Microsoft.SqlServer.MaintenancePlanTasks, Version=15.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" is not registered for use on this computer.
  Contact Information: Back Up Database Task []
The task with the name "Back Up Database Task" ... is not registered for use on this computer
You'll want to use the REST API for Google Drive. There are official client libraries for several languages, listed here. Authenticate your account via OAuth2. Depending on the client library you use, there are different tools to do this. I'm most familiar with the Python SDK, and I use Google's oauth2client. The run_flow() command is a simple way to get an OAuth2 refresh token that you can then use to authenticate API calls. Here's the full documentation for authenticating to Google Drive via OAuth2. Once you're authenticated, you can call the files list endpoint. By default, this will list all files in your My Drive. You can limit the search to just those files, so you don't have to iterate through all your files each time, by using a search query. If you have more backups than fit on a single page (it doesn't seem like it, especially with the max pageSize of 1000), you will have to paginate your calls. You can then filter the results by either the filename (as you indicated) or the createdTime parameter of files.list in your code. Make sure to include createdTime in your fields, by setting the fields parameter to a comma-separated list of parameters, e.g. "files(id,createdTime,name,mimeType)", or simply "*" to get every field. Get a list of all files older than 7 days, then call files.delete. You can then run this script as a cron job every night, however you want to deploy it. Alternatively, you could use the unofficial Drive command line tool, which will take care of a lot of this for you.
I'm using Google Drive to store the daily backups from my Linux machine. But I need a script that auto-deletes the files in a specific folder after 7 days, so that there are 7 backups in the folder. The file that gets backed up is called world-$(date +%d-%m-%Y).tar.gz; %d, %m and %Y are replaced with the day, month and year the backup was created. So if it created one today it would be called world-14-09-2018.tar.gz, and it gets stored inside a folder called backups. Is there any way to have it auto-delete the files, not just move them to the trash but delete them completely after 7 days? I'm not really familiar with these kinds of scripts, so if anyone could help me that would be really awesome.
Google Drive, auto delete files older than 7 days in specific folder
You can just copy the files in JENKINS_HOME. A far better approach would be to use thinBackup; with thinBackup you can make a backup and restore it.
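For the plain-copy route, a minimal sketch of a nightly archive job; the schedule, user, and paths are placeholders, and it is best run while Jenkins is idle:

  # /etc/cron.d/jenkins-home-backup - archive JENKINS_HOME every night at 01:30
  30 1 * * * jenkins tar -czf /var/backups/jenkins-$(date +\%F).tar.gz -C /var/lib/jenkins .

Restoring a known-good state is then a matter of stopping Jenkins, unpacking the chosen archive over JENKINS_HOME, and starting it again, which gives the revert capability the question asks about.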
We have a CentOS machine on which Jenkins is hosted. This Jenkins instance has lots of jobs and configurations of interconnected jobs. The problem is that whenever we want to make a change to multiple jobs, there is a risk of misconfiguration, so there should be a revert process that lets us go back to an older working version of Jenkins, just like Git does: if the code is buggy, I have the option to revert to the healthy code. Is there any standard solution available for this type of problem?
Git like change and version management of complete machine
You can run the svn checkout command to check out a working copy of your project and schedule svn update to bring the working copy up to date with the repository's state. Alternatively, you can run svn export. You can use Windows Task Scheduler to automatically run your script every night.
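A minimal sketch of the scheduled script; the repository URL and target paths are placeholders, and on Windows the same commands can simply be placed in a .bat file for Task Scheduler:

  # refresh (or create) a working copy of the latest revision
  svn checkout https://svn.example.com/repos/myproject /backups/myproject-wc

  # optionally produce a clean snapshot without the .svn metadata
  svn export --force /backups/myproject-wc /backups/myproject-export

Running checkout against an existing working copy of the same URL just updates it, so the same script works on the first run and on every night after.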
Let me start by saying we already back up the repository on the file system, and I also create a dump of the repository which is backed up as well. I have just tested hotcopy, and all it does is create a copy of the repository, which we already back up. I've been asked to create an automated export of working copies for all projects (ideally using a .bat file run via a scheduler) so that, for example, it can be performed every night out of hours. I cannot seem to find details of how to automate this process using a tool such as svnadmin (the documentation isn't the best). This is so that, in case everything goes kaput, I can quickly start coding on the text files at a moment's notice without having to faff with setting up an SVN server, reloading the repository, etc.
Automatically export a working copy from SVN
You cannot do that without editing the dump from 9.6 before loading it. The correct way to upgrade a database with dump/restore is to use pg_dump or pg_dumpall from the v10 installation to perform the dump. That is the supported way.

Follow-up comments:
- Asker: Sorry, but what do you mean by "editing the dump from 9.6 before loading it"? When I create the backup file, I don't see anything about sequences in the dump options.
- Answerer: Well, you create a backup by running pg_dump. You will have to use the pg_dump from PostgreSQL v10 to create a dump that can be loaded into v10. If you cannot do that, you'll have to edit the dump (your backup) until it loads into PostgreSQL v10. If it is a custom-format dump, use pg_restore to convert it to a plain-format dump.
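A minimal sketch of the supported route, assuming the v10 client tools can reach the old 9.6 server over the network; host names, user, and database name are placeholders:

  # run the v10 pg_dump against the 9.6 server...
  pg_dump -h old-96-host -p 5432 -U postgres -F c -f mydb.dump mydb

  # ...then restore into the v10 server (sequences come across correctly this way)
  pg_restore -h new-10-host -p 5432 -U postgres --create -d postgres mydb.dump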
I'm trying to migrate my PostgreSQL 9.6.3 database to 10.4. The restore seems to work correctly for the data, but I'm losing the sequences. I'm getting errors like "Column 'min_value' does not exist" and "Column not found in pgSet: last_values"... It seems that PG 10 handles sequences differently. My question is: is there a way to restore a backup from PG 9 to 10 without losing the sequence data? Thank you.
Sequence error postgres 9 to 10
Actually, you can use just *.pdf, and I assume it will already check all subfolders and process the .pdf files.
I installed the DropIt product to copy PDF files from a selected folder and its subfolders. I defined an association:

  rule: **;*.pdf (which means any subfolder and just PDF files)
  action: copy
  destination folder: c:\mybackup\%SubDir%\

I drag a selected folder and drop it onto DropIt's icon, but no file matches.
DropIt how to copy pdf files from folder and subfolders
You can't use shutil.copy on Android, because of "Error: Operation not permitted" (tested on a Galaxy A8 2018). I had a similar problem today with Android, Kivy and Buildozer. I haven't had time to solve it properly, but I think this workaround can be used:

  with open("filename", "rb") as f:
      data = f.read()
  with open("new filename", "wb") as f:
      f.write(data)

Update: shutil.copy actually copies the file successfully, but it still throws "[Errno 1] Operation not permitted:"; maybe you can use try/except to ignore it. The other thing you need to do is rescan the Android MediaStore if the file is a media file and you want to see it in the media browser (see "Refresh MediaStore on Android Kivy Example").
I have been trying to make an android app (via kivy launcher) that automatically makes a backup each time you save to a .db file. When running kivy on my pc, the program works perfectly, but when I use it on my phone via kivy launcher, the program just crashes. Interestingly, the next time I go into the kivy launcher, the backup file appears in the directory it was supposed to be saved in. I copy the .db file using shutil: shutil.copy('test.db','BACKUP_'+self.time+'.db') here is the python code here is the kivy log file The last few errors suggest there is some problem with the shutil.copy() method I am using to copy the files, but I really don't understand why it is giving me the error. I think it might have to do with android having issue with the directory I want to save the backup to, or maybe some permission issues. I am using shutil because it comes with the default python 3 library as far as I understand. I am also using the android kivy launcher because I haven't yet learned how to export a .apk file (I heard you need to use buildozer on linux or mac and I run windows). I would appreciate it if anyone could give me advice on how I can copy the .db file as a backup on android using kivy launcher.
How to make a backup/copy of .db file on android using python and kivy launcher
You have to convert your date variable to a string so you can concatenate it:

  SET @CreateDynamicSQL = 'CREATE TABLE [dbo].[paul_AccountContact_Backup_' + convert(varchar(20), @SYSDATETIME, <Format>) + '](

Documentation for the format parameter: https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-2017
I am trying to write a stored procedure to back up a table, but I keep getting:

  Msg 402, Level 16, State 1, Line 9
  The data types varchar and datetime2 are incompatible in the add operator.
  Msg 402, Level 16, State 1, Line 15
  The data types varchar and datetime2 are incompatible in the add operator.

How can I fix this?

  SET ANSI_NULLS ON
  GO
  SET QUOTED_IDENTIFIER ON
  GO
  DECLARE @CreateDynamicSQL nvarchar(1000);
  DECLARE @CopyDynamicSQL nvarchar(1000);
  SET @CreateDynamicSQL='CREATE TABLE [dbo].[paul_AccountContact_Backup_'+@SYSDATETIME+'](
      [AccountID] [int] NOT NULL,
      [ContactID] [int] NOT NULL
  ) ON [PRIMARY]
  GO'
  SET @CopyDynamicSQL='select * into [dbo].[paul_AccountContact_Backup_'+@SYSDATETIME+'] from paul_AccountContacts'
  EXEC(@CreateDynamicSQL);
  EXEC(@CopyDynamicSQL);
Dynamic Variable Name for backup table
There are actually two ways to use Amazon Glacier: as an Amazon S3 storage class (as you describe), or by interacting with Amazon Glacier directly. Amazon Glacier has its own API that you can use to upload/download objects to/from a Glacier vault (the equivalent of an S3 bucket). In fact, when you use Amazon S3 to move data into Glacier, S3 is simply calling the standard Glacier API to send the data to Glacier. The difference is that S3 manages the vault for you, so you won't see the objects listed in your Glacier console. So, what you might choose to do is create your WHM backups and send them directly to Glacier.

Versioning: An alternative approach is to use Amazon S3 Versioning. This means that objects deleted from Amazon S3 are not actually deleted; rather, a delete marker hides the object, but the object is still accessible. You could then define a lifecycle policy to delete non-current versions (including deleted objects) after a period of time. See (old article): Amazon S3 Lifecycle Management for Versioned Objects | AWS News Blog
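A minimal sketch of the versioning route with the AWS CLI; the bucket name and retention period are placeholders:

  # keep deleted/overwritten objects around for 180 days before they are permanently removed
  aws s3api put-bucket-versioning --bucket my-whm-backups \
    --versioning-configuration Status=Enabled

  aws s3api put-bucket-lifecycle-configuration --bucket my-whm-backups \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-old-versions",
        "Status": "Enabled",
        "Filter": {},
        "NoncurrentVersionExpiration": { "NoncurrentDays": 180 }
      }]
    }'

An additional noncurrent-version transition rule could push those old versions to a Glacier storage class before they expire, to keep the retention period cheap.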
Is it possible to move or copy files from S3 to Glacier (or, if not, to another cheaper storage class), even though the original S3 files will later be deleted? I'm looking for a robust solution for server backups going WHM > S3 > Glacier. I've trialled multiple lifecycle rules, and I can see several questions have been asked around this here, but I can't seem to get the settings right. WHM sends backups to S3 fine for me; it works by essentially creating a mirror of the on-server backups in S3. My problem is that the way the WHM/S3 integration works means that when the on-server backups are deleted at the end of the month, so are the backups in the S3 bucket. What I'd like to achieve is that before the files are deleted from S3 they're kept for a specified period, say 6 months. I've tried rules to archive them to Glacier without success and think this is because the original files are deleted and so are the Glacier instances? Is what I'm trying to achieve possible? Thanks.
Is it possible to keep files in glacier after deletion from s3?
I just brute-forced it by going down the list, adding smaller and smaller files until I was out of files or the disc filled up, then repeating. By "mathematically required" I just meant size of all files / 25 GB = ideal number of discs. I can post the resulting arrays on
I have around 4,000 files of wildly different sizes that I am trying to back up as efficiently as is reasonably possible. I know compressing them all into a giant tarball and splitting it evenly is a solution, but as I am using Blu-ray discs, if I scratch one section I risk losing the whole disc's contents. I wrote a Python script to put all the files (coupled with their sizes) into an array. I take the biggest file first, and either add the next biggest (if the total is still less than 25 GB) or move down the list until there is one I can add that will fit; once I hit the size limit, I start over with the next biggest remaining file. This works reasonably well, but it gets really ragged at the end and I will end up using 15 more discs than is mathematically theoretically required. Does anyone have a better method I'm not aware of? (This seems like a Google coding interview question, lol.) I don't need it to be perfect, I just want to make sure I'm not doing this stupidly before I run through this giant stack of non-cheap BD-Rs. I've included my code for reference.

  #!/usr/bin/env python3
  import os
  import sys

  # Max size per disc
  pmax = 25000000000

  # Walk dir
  walkdir = os.path.realpath(sys.argv[1])
  flist = []
  for root, directories, filenames in os.walk(walkdir):
      for filename in filenames:
          f = os.path.join(root, filename)
          fsize = os.path.getsize(f)
          flist.append((fsize, f))

  flist.sort()
  flist.reverse()

  running_total = 0
  running_list = []
  groups = []
  while flist:
      for pair in flist:
          if running_total + pair[0] < pmax:
              running_list.append(pair[1])
              running_total = running_total + pair[0]
              flist.remove(pair)
      groups.append(running_list)
      running_list = []
      running_total = 0

  print('This will take {} discs.'.format(len(groups)))
Programmatically packing 2TB of various sized files into folders of 25GB? (I used python, any language will be acceptable)
You could do this task with the CLI, e.g.:

  -bash-4.2$ pg_dump -s $(psql -c "select string_agg('-t '||relname,' ') from pg_class where relkind='v' and relnamespace='public'::regnamespace" -At) | grep -i create
  CREATE VIEW avva AS
  CREATE VIEW v AS

Of course, run it without the grep to get the full definitions; otherwise you would have to repeat this for every view. Alternatively, when creating a backup of the schema, under Dump Options #1 choose "Only schema".
I want to have a backup of my database, but using pgAdmin III I can only restore the tables; I want my views to be restored as well. Is there any way of doing that? Thanks.
postgreSQL backup tables and views
First you have to activate WAL reception:

  barman receive-wal --create-slot main-server

Then, depending on the version of Barman you are using, the following can solve your problem:

  # barman version < 2.2
  barman switch-xlog --force --archive main-server

  # barman version > 2.1
  barman switch-wal --force --archive main-server

Comment: this is only true if you are using streaming.
I have begun looking into using barman to perform my database backups however I have come across the following error: barman backup main-server This command gives the following result: ERROR: Impossible to start the backup. Check the log for more details, or run 'barman check main-server' When I then run: barman check main-server I get the following: Server main-server: WAL archive: FAILED (please make sure WAL shipping is setup) PostgreSQL: OK is_superuser: OK wal_level: OK directories: OK retention policy settings: OK backup maximum age: FAILED (interval provided: 1 day, latest backup age: No available backups) compression settings: OK failed backups: FAILED (there are 4 failed backups) minimum redundancy requirements: OK (have 0 backups, expected at least 0) ssh: OK (PostgreSQL server) not in recovery: OK archive_mode: OK archive_command: OK continuous archiving: FAILED archiver errors: OK Any help would be greatly appreciated EDIT: Log info from calling barman backup main-server: barman.wal_achriver INFO: No xlog segments found from file archival for main-server.
Barman backup: Backup failed issuing start backup command
An easy way to find where your PG data is located is to run: ps aux | grep postgres | grep -- -D or ps ax | grep postgres | grep -v grep Then zip that folder (for example /var/postgres/9.5/data; I have no idea where it is on your server): sudo zip -r ~/9_5_postgres.zip /var/postgres/9.5/data Download it to your local machine: scp server-user-name@ip-address:~/9_5_postgres.zip ~/ Unzip it: unzip ~/9_5_postgres.zip The folder will probably end up unzipped as ~/var/postgres/9.5/data under your user folder. Then run the postgres server against it (stop postgres first, and use stop/start/restart as needed): pg_ctl -D ~/var/postgres/9.5/data -l ~/var/postgres/9.5/pg.log start In your config/database.yml use the same login and password as in production, and change them on prod if you commit your config/database.yml to git.
I need to back up the PostgreSQL database of a Rails project deployed on EC2 using Capistrano. How can I do that? I also want to save the backup data on my local computer.
How to backup postgresql database of rails project deployed on EC2
Grant root access to deja-dup. sudo chown root:root /usr/bin/deja-dup sudo chmod u+s /usr/bin/deja-dup Give permission to it. sudo chmod 777 /usr/bin/deja-dup Please be aware that doing this exposes your system to risk, and it is strongly advised that you change the permissions back to their default settings as soon as you are finished using the backup program.
When trying to set up automated backups under Ubuntu 17.10 using Deja-Dup I realized that one cannot back up the root directory, since a normal user starting the deja-dup application does not have the rights to access all files in /. (A German discussion about a rather similar situation can be found here: https://forum.ubuntuusers.de/topic/wie-sichert-an-mit-deja-dup-ein-systemverzeich/) The usual workaround of running the deja-dup application under gksu no longer works on Ubuntu 17.10. It seems a decision has been made to prevent users from starting graphical applications as root on purpose, for it is often a bad/risky thing to do. However, to create regular backups of a system's / directory with deja-dup, the application has to be configured and later on started as root. Since the typical ideas like gksu, gksudo and sudo -H do not work under Ubuntu 17.10, I would highly appreciate any advice on a secure practice for running deja-dup as root. Can someone help with advice?
How to run grafical tool (e.g. deja-dup) as root on Ubuntu 17.10
Not sure why your '-notlike' isn't working, but it almost certainly has to do with the way your string gets evaluated: $LogType here is an array of strings (Get-Content returns one string per line), and when the left-hand side of -like/-notlike is an array, PowerShell returns the matching elements rather than a boolean, so the -notlike test is "true" as long as any of the 13 lines does not contain 'Database'. As an interim solution, or at least as a test, could you maybe try nesting two if statements? Something like this perhaps: if($LogType -like "*file list*") { if ($LogType -like "*Database*") { ... } else { ... } }
I am building a script to monitor backups with Ahsay in PowerShell and have run into an issue with identifying backup types, it works by checking for certain keywords in the first few lines of the log file, and I want to have it so that if it contains the word "File List" but also contains the word "Database" it is not logged as a Files type backup - code below (the variable $LogType contains the text that is being searched): get-childitem -Exclude *Scheduler*, *SystemTray*, *Archived*,*System*| select -last 1 | Get-Content -totalcount 13 -outvariable LogType if($LogType -notlike "*Database*" -and $LogType -like "*file list*") { ... } This returns as true and executes the if statement when the text it is searching is: [2018/01/14 19:30:06] [info] Start [ Windows Server 2008 R2 (FS1), AhsayOBM 6.27.0.0 ] [2018/01/14 19:30:10] [info] Using Temporary Directory C:\Users\systemadmin\.temp [2018/01/14 19:30:10] [info] Start running pre-commands [2018/01/14 19:30:10] [info] Finished running pre-commands [2018/01/14 19:30:11] [info] Start creating Shadow Copy Set ... [2018/01/14 19:30:21] [info] Shadow Copy Set successfully created Database [2018/01/14 19:30:21] [info] Downloading server file list ... [2018/01/14 19:36:44] [info] Downloading server file list ... Completed [2018/01/14 19:36:45] [info] Reading backup source from hard disk ... I am having trouble understanding why this is the case, can anyone help?
Issue with searching for strings in text in Powershell
I had a problem with FTP FEAT and the certificate. I disabled FEAT in the lftp config file and specified the full path to the certificate. It works now.
I would like to ask you for a help. I'm a scripting noob and I need to make a script for backing up our webserver via FTPS. I made this script, but it only makes the backup file, but it does not upload that file to the FTPS server. But when I run that lftp command alone, it works. I'm looking at this for quite a long time, but I can't find out, why it's not working... Could someone help please? Thank you! #!/bin/bash # SETTINGS RMDATE=$(date --iso -d '10 days ago').tar FTPUSER=user FTPPW=pass FTPSERVER=my.server.com LFTP=/usr/bin/lftp # DELETE OLD BACKUPS rmold () { $LFTP << EOF open ${FTPUSER}:${FTPPW}@${FTPSERVER} rm -rf ${RMDATE} bye EOF echo "Done." } # PLESK BACKUP if /usr/local/psa/bin/pleskbackup server -v --exclude-logs >/tmp/backup-plesk.log 2>&1 --output-file=/var/www/bak/`date -I`.tar; then if lftp -c "open ${FTPUSER}:${FTPPW}@${FTPSERVER}; put /var/www/bak/`date -I`.tar"; then rm -f /var/www/bak/`date -I`.tar /usr/bin/sendEmail <<< some parameters >>> # backup success message rmold else /usr/bin/sendEmail <<< some parameters >>> # upload error message exit 1 fi else /usr/bin/sendEmail <<< some parameters >>> # backup error message exit 1 fi
bash backup script - pleskbackup + lftp
pg_dumpall will try to dump the entire cluster (all databases that the engine serves). template1 is a built-in template database that exists in every cluster (GUI tools often hide it); pg_dumpall connects to it (or to postgres) to enumerate the databases, which is why it appears in the error. Also note that -w means "never prompt for a password", so the connection simply fails when no password is available from a .pgpass file or the PGPASSWORD environment variable; use -W (or drop -w) if you want to be prompted. If you only need your lucz database (I'm assuming), dump just that one with pg_dump instead of pg_dumpall. For help: pg_dumpall --help or https://www.postgresql.org/docs/9.4/static/app-pg-dumpall.html
postgresql 9.4, following this cheat sheet: http://www.postgresonline.com/special_feature.php?sf_name=postgresql90_pg_dumprestore_cheatsheet&outputformat=html I am trying to back up; here is my command: C:\Program Files\PostgreSQL\9.4\bin>pg_dumpall -h arcserver -U postgres -w -p 5433 -f "R:\Data\LUCZ_2017\PostgreSQL_Backup\lucz.sql" and then it gives me this strange error: pg_dumpall: could not connect to database "template1": fe_sendauth: no password supplied What is template1? As you can see in the picture, there is no template1. I put in the -w so it doesn't ask me for the password. What am I doing wrong here?
postgresql pg_dumpall password error
I got it working by putting content into its own intent-filter group along with a file data scheme, both having a mime-type of application/octet-stream. <intent-filter android:priority="999" > <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <category android:name="android.intent.category.OPENABLE" /> <data android:scheme="http" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="https" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="ftp" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="ftps" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="file" android:host="*" android:pathPattern=".*\\.sbu" /> </intent-filter> <intent-filter android:priority="999" > <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <category android:name="android.intent.category.OPENABLE" /> <data android:scheme="file" android:mimeType="application/octet-stream" android:pathPattern=".*\\.sbu" /> <data android:scheme="content" android:mimeType="application/octet-stream" android:pathPattern=".*\\.sbu" /> </intent-filter>
I have an app where I am allowing users to backup data and want them to be able to click on the back up file via a File Manager, GMail, and the Downloads system app. I have defined the following intent in my manifest file... <intent-filter android:label="Simple Backup File" android:priority="999" > <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <category android:name="android.intent.category.OPENABLE" /> <data android:scheme="http" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="https" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="ftp" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="ftps" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="content" android:host="*" android:pathPattern=".*\\.sbu" /> <data android:scheme="file" android:host="*" android:pathPattern=".*\\.sbu" /> </intent-filter> The above works, if I click on the .sbu file from a file manager, but not from GMail or list of Downloads. I did read that I need a mimeType to get the content scheme working, but when I define a mimeType as either */* or application/octet-stream, the functionality even stops working from within a File Manager. What am I doing incorrectly? Do I need to set any settings when writing the file for the first time? How best would you handle my situation.
Android: File Extension Intent Filter does not work properly with GMail/Downloads app
Funny enough, the user has to be granted the right to execute this system function in the database (s)he connects to. Although this is a system function identical in every database and doing only things at the cluster level, the right to execute it is saved separately in every single database. So, if you want to have a special user executing exclusively this function (plus pg_stop_backup(boolean), of course), it would be best to grant it in the postgres database and allow the user to connect only to this database.
https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE and https://www.postgresql.org/docs/9.6/static/continuous-archiving.html state that the function pg_start_backup(text, boolean, boolean) can be executed by a superuser or a user which has been granted execute right on the function. In practice, this doesn't work, because the function is "security invoker". It would only work if it was "security definer". Page Create PostgreSQL 9 role with login (user) just to execute functions says "functions must have been created with SECURITY DEFINER or this user will still be unable to execute them", which is perfectly correct. What are the minimal rights I have to give a user on top of "execute on function pg_start_backup" in order to enable him to really execute the function?
Granting execute on function pg_start_backup in PostgreSQL does not work
No. Unfortunately. Vote for the feature here: DCR - Attach database with NORECOVERY
Is there a way to attach SQL Server mdf/ldf files such that the database is in Restoring mode and we can restore log backups on top of it? I have a hardware array snapshot (crash consistent) that contains the mdf/ldf files. I need to attach these files to another SQL Server instance and then do log restores for a point-in-time recovery (using STOPAT). The CREATE DATABASE .. FOR ATTACH command brings the database online. Log restores cannot be done on an online database. Is there a way to accomplish this?
Attach database with NORECOVERY
I reconfigured the backup setup and now it is working fine. I don't know why this error came up before. I used the following command to run the backup: $ backup perform --trigger my_backup The steps that I took to get it working: I am using rbenv; I created .ruby-gemset and .ruby-version in a new folder and installed backup: $ gem install backup It installed all the dependencies of backup. I generated the model with my mongodb: $ backup generate:model --trigger my_backup --storages='s3' --compressor='gzip' --notifiers='mail' --databases="mongodb" Then I ran the backup perform command, and it works! $ backup perform --trigger my_backup :)
I am testing the backup gem and the size of the db is very small. I have mongodb with me. My configuration is: Backup::Model.new(:backup_db, 'Backup for my db') do split_into_chunks_of 250 database MongoDB do |db| db.name = "my_dev" # db.username = "" # db.password = "" db.host = "localhost" db.port = 27017 db.ipv6 = false # db.only_collections = ["only", "these", "collections"] db.additional_options = [] db.lock = false db.oplog = false end store_with S3 do |s3| s3.access_key_id = "my key" s3.secret_access_key = "my key" s3.region = "my region" s3.bucket = "https://region-2.amazonaws.com/bucket-name" s3.keep = 10 s3.max_retries = 3 s3.retry_waitsec = 5 s3.chunk_size = 5 # MiB end sync_with Cloud::S3 do |s3| s3.access_key_id = "my key" s3.secret_access_key = "my key" s3.bucket = "https://region-2.amazonaws.com/bucket-name" s3.region = "my-region" # s3.path = "" s3.mirror = true s3.concurrency_type = false s3.concurrency_level = 2 s3.directories do |directory| end end compress_with Gzip end When running backup perform, I am getting the following error: [2017/05/30 14:23:28][error] ModelError: Backup for Backup Gauge Lrs db (backup_db) Failed! [2017/05/30 14:23:28][error] An Error occured which has caused this Backup to abort before completion. [2017/05/30 14:23:28][error] Reason: Excon::Errors::SocketError [2017/05/30 14:23:28][error] break from proc-closure (LocalJumpError) [2017/05/30 14:23:28][error] [2017/05/30 14:23:28][error] Backtrace: Anyone explain me about this error? I run the command with the log file option. Same log I am getting in the file too. I am using Rails 4.2 and Ruby 2.2.1 Backup Gem version is 3.4.0
Ruby backup Gem s3 upload not working
To create an archive, you need to save the workbook when opening it, not at the end when the user is closing it. The code below takes care of this: Private Sub Auto_Open() Dim WoExt, Ext, BkPath, nDateTime As String On Error GoTo ErrorHandler: 'defining variables nDateTime = Format(Now, "YYMMDD") Ext = ".xls" WoExt = ThisWorkbook.Name BkPath = "C:\Users\xxx.xxx\Desktop\vbatest\Backup\" Application.DisplayAlerts = False ActiveWorkbook.SaveCopyAs (BkPath + WoExt + " - Backup - " + nDateTime + Ext) Application.DisplayAlerts = True Exit Sub ErrorHandler: MsgBox "Backup has not been saved." End Sub This just saves the backup and leaves saving the edits to the user at the end.
I wrote a code to run when user saves workbook, this code saves another copy, and then saves the original file again, to avoid leaving user editing the "backup" workbook. Once it saves the original file again, the "after_save" trigger is fired, and keeps saving to infinity. I checked for a solution here on StackOverFlow but haven't found one. Private Sub Workbook_AfterSave(ByVal Success As Boolean) Call SaveToLocations End Sub Sub SaveToLocations() Dim WoExt, Ext, BkPath, nDateTime As String Static OrigName As String 'defining variables nDateTime = Format(Now, "YYMMDD") OrigName = "C:\Users\xxx.xxx\Desktop\vbatest\test orig.xlsm" Ext = ".xls" WoExt = "test orig" BkPath = "C:\Users\xxx.xxx\Desktop\vbatest\Backup\" Application.DisplayAlerts = False ActiveWorkbook.SaveAs (BkPath + WoExt + " - Backup - " + nDateTime + Ext) ActiveWorkbook.SaveAs OrigName Application.DisplayAlerts = True End Sub
Creating a backup of a file with copy after_save, goes into endless loop
There is no blob backup facility. You'll need to make your own backups (e.g. making copies of blobs, either to the same storage account or a different one). You can take snapshots, but as @Gaurav points out in comments, snapshots are tied to the original blob, so if you delete the original, you delete the snapshots. I answered a similar question regarding backups and Table Storage, as well, here.
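To make the "copy the blobs yourself" suggestion concrete, here is a rough sketch using the current azure-storage-blob (v12) Python package; the container names and connection string are placeholders, the destination container is assumed to already exist, and same-account copies are authorized by the account key (a cross-account source would need a SAS on its URL).

from azure.storage.blob import BlobServiceClient

def backup_container(conn_str, src_name, dst_name):
    # Server-side copy of every blob in src_name into dst_name (same account).
    service = BlobServiceClient.from_connection_string(conn_str)
    src = service.get_container_client(src_name)
    dst = service.get_container_client(dst_name)
    for blob in src.list_blobs():
        source_url = src.get_blob_client(blob.name).url
        # start_copy_from_url is asynchronous on the service side; check
        # copy status via get_blob_properties() if you need confirmation.
        dst.get_blob_client(blob.name).start_copy_from_url(source_url)

# Example (placeholder values):
# backup_container("<connection string>", "data", "data-backup")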
I wonder is there's an inbuilt way in azure to backup a blob account, or just a container if that can't be done. Looked into azure backup service but can't find the option for doing it, just options to backup VM. Alternatively I can write my custom back up strategy, but not sure if it's the case that I can't find that option inbuilt. Thanks,
Azure blob storage backup
I would recommend that you merge your backups and primaries into a single consolidated entry - and also, to speed up subsequent requests, that you look into keepalives. This would prevent additional TCP handshake overhead. upstream backend { server server1; server server2; server server3 backup; server server4 backup; keepalive 25; } server { listen 127.0.0.1:80; location / { proxy_set_header Connection ""; proxy_http_version 1.1; proxy_pass http://backend; proxy_next_upstream invalid_header http_500 http_502 http_504 http_403; } } In the above configuration, the first two servers will be tried in a round-robin configuration, then the backup servers will be tried. I would also explore max_fails and fail_timeout, which can be appended to the upstream servers as a way of dealing with failures in the upstream list. proxy_next_upstream can also be configured to move to the next upstream upon the "timeout" condition. More about keepalives in the NGINX documentation.
I have an NGINX proxy with 4 upstream servers behind it, 2 local, 2 remote, this is to maintain the maximum failure tolerance possible. Now I can make it try each server in order with something like this upstream backend { server server1; server 127.0.0.1:8081 backup; } upstream fallback1 { server server2; server 127.0.0.1:8082 backup; } upstream fallback2 { server server3; server server4 backup; } server { listen 127.0.0.1:80; location / { proxy_pass http://backend; proxy_next_upstream invalid_header http_500 http_502 http_504 http_403; } } server { listen 127.0.0.1:8081; location / { proxy_pass http://fallback1; proxy_next_upstream invalid_header http_500 http_502 http_504 http_403; } } server { listen 127.0.0.1:8082; location / { proxy_pass http://fallback2; proxy_next_upstream invalid_header http_500 http_502 http_504 http_403; } } Is there a better way of achieving this? As to get to the remote servers I'm looping back through the servers for a third time which feels wrong. I want nginx to avoid using the remote servers if the local ones have the data as this would incur a cost. I also don't mind if 100% of requests went to one server if the server was responding. Thank you in advance for any help!
NGINX Upstream servers in order
OK, I excluded the unnecessary directories at the subdirs=($(find * -maxdepth 0 ! -path /path/to/exclude -type d)) step, so the directories are excluded in the step before the duplicity process runs. Thank you.
#!/bin/bash datetime="`date +%Y%m%d`"; export AWS_ACCESS_KEY_ID="MYKEY" export AWS_SECRET_ACCESS_KEY="MYSECRET" export BACKUP_DEST_FILES="s3://s3.eu-central-1.amazonaws.com/mybucket" cd /var/www/ dirs=($(find * -maxdepth 0 -type d)) for dir in "${dirs[@]}"; do cd $dir subdirs=($(find * -maxdepth 0 -type d)) for subdir in "${subdirs[@]}"; do duplicity full --exclude "**logs/**" --exclude "**backups/**" --no-encryption $subdir $BACKUP_DEST_FILES/$dir/$datetime/$subdir done cd ../ done This code is supposed to back up every directory and subdirectory under /var/www/ except for the "logs" and "backups" dirs. While the exclusion works perfectly with the rsync command below: rsync -ar --exclude='backup' --exclude='log' --exclude='logs' --exclude='backups' $subdir backups/$datetime/ ...it doesn't work with the duplicity command below; it just backs up everything and doesn't exclude anything. duplicity full --exclude "**logs/**" --exclude "**backups/**" --no-encryption $subdir $BACKUP_DEST_FILES/$dir/$datetime/$subdir What am I missing here?
duplicity --exclude option doesn't exclude the mentioned dir
Given your goal to make an archive, presumably preserving owner, file modes, file flags and ACLs, if available, then this should do what you need: #!/bin/bash archive_name="Me.$(date +%d_%b_%Y-%k:%M:%S).tar.xz" ( cd Compression_Play/ && tar -cvpf - 2017-03-23_01-13-02.avi | pixz -9t > "$archive_name" ) Based on the GitHub page for pixz, you reverse the pipe to get your data out: pixz -d < "$archive_name" | tar -xvpf -
I am attempting to make a script that will make a backup file of a video file in the same directory with the time stamp at the end of the tar file. The script is for demonstration purposes only, that is why I do not intend on sending the file to a different directory. Below is how far I have come with it. #!/bin/bash cd Compression_Play/ echo Me.$(date +%d_%b_%Y-%k:%M:%S).tar.xz tar -I "pixz -9t" -cvf Me.$(date +%d_%b_%Y-%k:%M:%S).tar.xz 2017-03-23_01-13-02.avi My problem is whenever I try to execute the script it gives me this: Me.29_Mar_2017-22:03:49.tar.xz tar: -9: (PROGRAM ERROR) Option should have been recognized!? Try 'tar --help' or 'tar --usage' for more information. As best as I can tell the problem is with the quotes in my tar command. Is there a way to make the script so I can keep the quotes or substitute them?
How to use quotes in bash scripts when a command in the script needs quotes too?
Run gpg --version and /usr/bin/gpg --version and check whether they are the same. Duplicity might fall back to version 1.x.x, whereas your terminal might have an alias to invoke GnuPG version 2.x.x. In that case the key is created/imported with GnuPG 2, but GnuPG 1 might not know about it(?) Alternatively, if you would like Duplicity to use GnuPG 2 and you are on debian (or related), you can divert /usr/bin/gpg2 to /usr/bin/gpg as described here or here. In that case duplicity will be forced to use version 2. As noted in the reference, diverting might have undesirable side-effects on other programs expecting GnuPG version 1 when they call /usr/bin/gpg.
I am trying to create an ansible role to automate backups. However, it fails with the error: Local and Remote metadata are synchronized, no sync needed. Last full backup date: none GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: ADD3F11Easdsdfs: skipped: public key not found gpg: [stdin]: encryption failed: public key not found ===== End GnuPG log ===== The PGP key was generated using gpg --gen-key <filename> with these settings: Key-Type: DSA Key-Length: 4096 Name-Real: {{ gpg_name }} Name-Comment: Used primarily for backup encryption on {{ inventory_hostname }} Name-Email: {{ gpg_email }} Expire-Date: 0 %no-ask-passphrase %no-protection %commit %echo done As you can see, it has no expiry date (so it cannot have expired) and no passphrase. Both properties have been manually verified using the CLI. This is the command I am using to run duplicity: duplicity full /var/www gs://backups2/{{ inventory_hostname }} --encrypt-key {{ gpg_email }} I have also tried using the key ID: duplicity full /var/www gs://backups2/hostname --encrypt-key ADD3F11E Any idea what could be going wrong?
Duplicity backups with PGP fail: "Unusable public key"
No, MySQL does not provide any specific API to back a database up. MySQL does provide the mysqldump utility, which can be called from the command line to back up a database, and it also provides a general C API to access a MySQL server. The general API can be used to write your own backup program if you are unhappy with mysqldump.
I am looking for a way to back up a MySQL database using a C program. Does MySQL provide an API for backup?
C program to consume MySQL API for backup? [closed]
pgAgent stores its data by default in the pgagent schema of the postgres database, so you can dump and restore that schema.
I have created a job via pgAdmin in Postgres. This is a test system. How can I export this job to a database on another computer (the production system)? Greets, Benjamin
Export a job in postgres to an other database (productive system)
You can create: an archive of the working tree: zip -r myrepo.zip myrepo -x *.git* or a bundle of the .git repo (as explained here): cd myrepo git bundle create /tmp/repositoryname.bundle --all That would give you two files (easy to copy to any location you want) with the minimum size. But if your repo is big, you might want to create an incremental bundle, with only the last few commits in it (provided the target location where you might use that bundle already has the rest of the history).
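If you end up bundling several repositories on a schedule, a small Python wrapper around the same git bundle create command keeps the backups date-stamped; the repository list and destination directory below are made-up examples.

# Create a date-stamped bundle of each listed repository.
import datetime
import os
import subprocess

REPOS = ["/home/me/myrepo"]          # hypothetical repository paths
DEST = "/mnt/backup/git-bundles"     # hypothetical destination directory

def bundle_all(repos=REPOS, dest=DEST):
    os.makedirs(dest, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    for repo in repos:
        name = os.path.basename(repo.rstrip("/"))
        target = os.path.join(dest, "%s-%s.bundle" % (name, stamp))
        # --all includes every ref, so the bundle can recreate the repo.
        subprocess.run(["git", "bundle", "create", target, "--all"],
                       cwd=repo, check=True)

if __name__ == "__main__":
    bundle_all()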
I want to backup a Git versioned project, including the working directory changes, and a pointer to the remote repository. The backed-up folder should contain the working directory files, and needed .git files in order to allow the user to navigate to the project and run git pull (or another git command) safely. It doesn't matter if the remote directory needs to be downloaded again and check "integrity" with the working directory. I can't backup the whole .git folder, as it contains very large files. I tried these commands: git --aggressive git reflog expire --expire=now --all git repack -ad git prune But they still leave large (>200 MB) *.pack files in .git/objects/pack folder. I tried removing the objects folder but that makes the repository invalid: fatal: Not a git repository (or any of the parent directories): .git. I tried git init after removing git --aggressive0 folder, but that gives the error git --aggressive1, and I can't do any other command. I tried removing all files and folders inside git --aggressive2, except for git --aggressive3, and it's still not a valid git repository. I tried git --aggressive4 after removing those folders, but then the changes on the working directory become "untracked", even though they are the same files on the remote repository, so I'm forced to run git --aggressive5 and download the repository again. Is there a way to create a bare bones Git repository, with minimum pack files, while maintaining the "tracked" status of the working directory?
Bare bones Git repository with tracked files
You can stop a running backup by using the KILL YourProcessIDHere command. To identify the process (spid) to kill, use the sp_who command and look in the cmd column for the BACKUP command; the spid shown there is what you pass to KILL. More information can be found here: MSDN Kill
I back up a database to a .bak file: BACKUP DATABASE MyDatabase TO DISK='E:\MyDatabase.bak' How can I stop this process?
Stop creation of .bak file
In case anyone is looking for a solution, I found the answer in the link below, but be careful because in my case it erased all my existing connections except the default one; besides that it works perfectly: SQL command to stop job in pgAdmin 4
A few days ago I tried to restore a Postgres backup using pgAdmin 4.1. The restore process failed, so I did it again outside the tool, via the command line. Now the problem is that a small blue window is still displayed in the bottom-right corner of my pgAdmin, reminding me that restoring failed with exit code -1, and there is no way to dismiss it; not even closing and restarting the program, processes or windows, nor the "reset layout" command, could help me. Does someone know how to kill that popup window? Thank you!
pgAdmin 4.1:Restoring backup failed but window's always open
The risk of not stopping the server is that not all the changes (in the database, for example) may be written to disk correctly, so you could end up in an unstable state. If you really want to try, you can follow the exact same procedure without the /opt/bitnami/ctlscript.sh stop part. The thing is, when restoring the server, you may have to execute /opt/bitnami/ctlscript.sh restart. In any case, I do not recommend trying it.
Currently we are doing the process of backup and restore as explained below: https://docs.bitnami.com/google/apps/gitlab/#how-to-create-a-full-backup-of-gitlab This requires stopping the server, especially for taking the backup. Is it possible to do backup and restore without stopping the server? How can I do this?
Bitnami Gitlab backup and restore without stopping server
It's not possible to change after the initial configuration. If you want to change the configuration you have to rebuild the Backup vault and register the servers. I had the same issue at a customer in the past.
I'm using DPM with Azure online backup and I want to change the redundancy from Geo to Local, but it's greyed out. Is there a way to change it? I don't want to create a new vault just for that, because then I would have to re-upload everything from DPM to Azure again.
Azure backup configuration - Change from Geo to Local (greyed out)
Backing up a git repository is as simple as either: git clone --bare it to a new place, which also allows git push/pull for incremental backups; or tar -czf backup.tar.gz myrepo/.git/ and copy that file somewhere, so you can manage the whole backup as a single file. It's easy.
I am working on backing up some git repos as part of a new backup plan. It seems git bundle is the way to go, but in my (admittedly short) Google searches I cannot seem to find out whether I can write a bundle directly into a specific directory. For my SVN repos I mounted a CIFS share and pointed the dump directly to that share, without having to script something to create the file and then move it. Let me know, thank you.
Using git bundle create to backup repos to a specific directory?
I made this bash script, replace.sh, to make it easy to reuse later: #!/bin/bash old=$1 new=$2 grep -r -l "$old" --exclude='*.sh' --exclude='*.old' * | while read -r line ; do cp "$line" "$line.old" echo "Replacing '$old' with '$new' in file '$line'" sed -i "s/$old/$new/g" "$line" done Make it executable: $ chmod +x replace.sh Note: if using Cygwin for Windows you may need to use dos2unix: dos2unix replace.sh Then run it with two strings as arguments, the string to find and the one that will replace it: ./replace.sh foo bar This makes a backup (.old) of all files (excluding .old and .sh files) containing the word 'foo', then replaces it with 'bar'. Note that it operates on the current directory and its subdirectories. Any useful tips would be much appreciated!
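If you would rather sidestep shell quoting entirely, here is a rough Python equivalent of the same idea (literal substring replacement, .old backups, current directory and below); the skipped extensions simply mirror the script above, plus .py so the tool does not edit itself.

#!/usr/bin/env python3
# Recursively replace a literal string, keeping a .old backup of each changed file.
import os
import shutil
import sys

def replace_in_tree(old, new, root="."):
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            if name.endswith((".old", ".sh", ".py")):
                continue  # skip backups, shell scripts and this tool itself
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as fh:
                    text = fh.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            if old not in text:
                continue
            shutil.copy2(path, path + ".old")
            print(f"Replacing '{old}' with '{new}' in file '{path}'")
            with open(path, "w", encoding="utf-8") as fh:
                fh.write(text.replace(old, new))

if __name__ == "__main__":
    replace_in_tree(sys.argv[1], sys.argv[2])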
I want to find and replace the string "foo" with "bra" in several files, traversing through directories (Linux machine). If a string is replaced in file "example.txt" I need that file to be copied to "example.txt.old" before the string replacement. I can replace the string in files recursively like this: find . -type f -name '*' -exec sed -i 's/test1/test2/g' {} + and it works fine, but there is no backup for when something doesn't work. Alternatively, I have stumbled upon this perl script that works, albeit I am more comfortable using native unix commands: # perl -e "s/old_string/new_string/g;" -pi.save $(find DirectoryName -type f) This, however, backs up ALL files, which is not what I want.
Find and replace string in files recursively, create backup for affected files
It will not work, unfortunately. It is very sad that Google pushes two things that don't work together. The Keystore on Android doesn't have a backup option like the Keychain on iOS. Your data will be backed up, but after restoration you will not have a way to decrypt it. You should use a different way to back up the data - the easiest one is to have a user account and store the data on your backend. I wrote about it more here: https://medium.com/@thecodeside/android-auto-backup-keystore-encryption-broken-heart-love-story-8277c8b10505
When you encrypt the app's data stored on the device, it is recommended to use the KeyStore to generate and save the key Material used for the encryption. If the user wants to backup the app's internal storage, he can use adb backup or Google's Cloud Backup. That's what I have understood. But when the data is encrypted by keys stored in the Android's KeyStore, is it possible to restore the backup the user/Google made? Or does the the use of encryption prevent the backup function?
Can the encryption Keys stored in the Android's KeyStore be backed up?
As you have a heavy load, adding a replica set is a good solution, since the backup can then be taken on a secondary node. Be aware that a replica set needs at least three servers (you can run a master/slave/arbiter setup, where the arbiter needs only a small amount of resources). mongodump takes a general query lock, which will have an impact if there are a lot of writes to the dumped database. Hint: try to make the backup when there is light load on the system.
We operate a server for our customer with a single mongo instance, gradle, postgres and nginx running on it. The problem is that we have massive performance problems while mongodump is running: the mongo queue is growing and no data can be queried. The next problem is that the customer does not want to invest in a replica set or a software update (mongod 3.x). Does anybody have an idea how I could improve the performance? Command to create the dump: mongodump -u ${MONGO_USER} -p ${MONGO_PASSWORD} -o ${MONGO_DUMP_DIR} -d ${MONGO_DATABASE} --authenticationDatabase ${MONGO_DATABASE} > /backup/logs/mongobackup.log tar cjf ${ZIPPED_FILENAME} ${MONGO_DUMP_DIR} System: 6 Cores 36 GB RAM 1TB SATA HDD + 2TB (backup NAS) MongoDB 2.6.7 Thanks Best regards, Markus
performance issue until mongodump
And the answer is: write your command properly! adb backup -apk -shared -all -f c:\XperiaBackup20160618.ab The little '-f' makes all the difference.
Android 5.1.1 on Sony Xperia Z1, not rooted ADB ("Minimal ADB and Fastboot") on Windows 8 Hello knowledgeable people, I am on the way to root my Xperia Z1 for the first time and am trying to use "Minimal ADB and Fastboot" to backup my phone using an ADB command like adb backup -apk -shared -all c:\XperiaBackup20160618.ab All steps in the process go well ... ADB recognizes the device, I get to see all screens I'm supposed to see, I see the files being backed up on my phone screen during the process, the time the process takes is appropriate ... except that no backup file shows up on my computer. Not even a 0 KB one, like I read elsewhere. After trying to find it literally everywhere manually and with Windows search, I checked whether ADB is sandboxed by my Comodo Internet Security (it is not for all I can see). To see if something else works, I then tried pulling only my SD card using ADB pull, but got the same result: The process runs smoothly, progress percentage ticks steadily up in the command prompt and the file names copied are shown, but in the end I have no files nowhere. What am I missing ...? (Note, I read that problems between certain ADB versions and PC systems can be solved by downloading alternative releases of the Android SDK, but I prefer not to download a 1 GB suite just for backing up my phone. I'll do it, though, if you guys say it'll work then.) Thanks a lot!
ADB backup and pull commands create no files on PC
A simple way: install this plugin to create your WordPress backup and download it: https://wordpress.org/plugins/duplicator/
I am making a WordPress website and I might create new builds. How can I keep a backup of, or save, my latest builds? I would prefer to use and manage builds through GitHub. Is there a particular way that I can do it in WordPress?
How to keep backup or save latest WordPress build in Github
Having only the .git folder will not get you back anything you haven't committed. You can duplicate a repository that way, but the chances of losing or damaging a USB drive are much higher than that of the (professionally managed) servers breaking down.
I was thinking, for example: I have a website versioned using git (connected to an online repo), but I'm really careful, so I want to back up my website to a USB device too, maybe with an rsync command. What if I copy ONLY the .git folder? In case everything got broken (PC, server and online repo) except that precious USB device, could I completely restore my work (at the last commit) from only the .git folder? Thanks
BackUp using .git?
Many backup tools use snapshots: they copy the locked file from the snapshot rather than copying it directly from the live filesystem. If you're on Windows you should check Windows VSS; see the Microsoft documentation for more details. Otherwise, if the filesystem you're on supports snapshots, check its documentation as well. Third-party tools: you can use the subprocess Python module to run third-party tools which will take snapshots for you. Microsoft VSS: in case you want to do it yourself you might need modules from the Win32 API such as the win32com module. There is also a project on GitHub that seems to do the job: pyshadowcopy. Filesystem snapshots: depending on the filesystem features, you might find Python modules or tools allowing you to take a snapshot.
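To make the subprocess idea a bit more concrete, here is a very rough, Windows-only sketch. It assumes an elevated prompt, that the WMI Win32_ShadowCopy class behaves as described, and the document path is made up; on some systems you may need to expose the snapshot via mklink /d instead of reading the device path directly.

# Create a VSS snapshot via WMI (through PowerShell), then copy a locked file out of it.
import shutil
import subprocess

def _ps(command):
    # Run a PowerShell one-liner and return its trimmed stdout.
    return subprocess.check_output(
        ["powershell", "-NoProfile", "-Command", command], text=True).strip()

def create_snapshot(volume="C:\\"):
    # Win32_ShadowCopy.Create returns the ID of the new shadow copy.
    return _ps("(Get-WmiObject -List Win32_ShadowCopy)"
               ".Create('%s', 'ClientAccessible').ShadowID" % volume)

def snapshot_device(shadow_id):
    # e.g. \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy3
    return _ps("(Get-WmiObject Win32_ShadowCopy | "
               "Where-Object { $_.ID -eq '%s' }).DeviceObject" % shadow_id)

if __name__ == "__main__":
    device = snapshot_device(create_snapshot("C:\\"))
    # Copy from the snapshot instead of the live (possibly locked) file.
    shutil.copy2(device + r"\Users\me\Documents\report.docx",
                 r"C:\backup\report.docx")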
I am trying to create a google drive like backup program using python that backs up to a Linux box that will further backup to an off site place tbd. I have ran into a few interesting coding and computer challenges in doing this. The one I am working on right now has to do with "locked" files. So what do I mean by this? In windows 7 if you create a .txt file you can open it in notepad(any program) and at the same time you can open it in a python program. If you make a change in the .txt file and save the change but BEFORE closing it you can still open and see the changes in pythn. Now change the file to a .docx windows file and open it with word 2007. While opened in word you cannot access it with in python until the user closes it. Now if you look at google drive, the desktop install not the web only variety, you can open a .docx file and change it. Once you save it but BEFORE closing google drive has already synched the file. Google drive must have some sort of lower level access to the file than the simple python file.open() command. So here is the question. Does anyone know of a way to access files in python in such a way as to keep me from having to wait for the user to close the file. Edit 1: Let me further explain. Once I have created an sqlite database that has all the files and directories I will then use the win32file.ReadDirectoryChangesW() function to monitor for changes. My problem stems from the fact that when setting up the application of first install/run it must catalog all files and files that are open in windows office are locked and cannot be cataloged. Is there a way around this?
How to access "locked" files for backup program
Not a complete answer, but I'll just gather here a few bits of information that might be useful to somebody eventually. Based on this article https://helgesverre.com/blog/fetch-info-from-soundcloud-api/ First you need to register an app here, where you'll get your client id http://soundcloud.com/you/apps/new $clientid = "*******"; // Your API Client ID $userid = "****"; // ID of the user you are fetching the information for $username = "*****"; // build our API URL $url = "http://api.soundcloud.com/resolve.json? url=http://soundcloud.com/{$username}&client_id={$clientid}"; $user_json = file_get_contents($url); $tracks_url = "http://api.soundcloud.com/users/{$userid}/tracks.json?client_id={$clientid}"; $tracks_json = file_get_contents($tracks_url); $playlists_url = "http://api.soundcloud.com/users/{$userid}/playlists.json?client_id={$clientid}"; $playlists_json = file_get_contents($playlists_url); $followings_url = "http://api.soundcloud.com/users/{$userid}/followings.json?client_id={$clientid}&page_size=200"; // 200 is max $followings_json = file_get_contents($followings_url); $followers_url = "http://api.soundcloud.com/users/{$userid}/followers.json?client_id={$clientid}&page_size=200"; // 200 is max $followers_json = file_get_contents($followers_url); $reposts_url = "http://api-v2.soundcloud.com/profile/soundcloud:users:{$userid}?client_id={$clientid}&limit=1000&offset=0"; // 1000 works $reposts_json = file_get_contents($reposts_url);
I tried searching for a tool that would download all data related to a soundcloud user (uploaded tracks, likes/collection, reposts, playlists, comments, groups etc), backing it up locally, but haven't had luck so far. The user data format is not crucially important, and could be something like XML or JSON. I guess it wouldn't be hard to create it using their API, but I thought it's strange there's no tool like that already, so I wanted to ask here first.
Download / backup all soundcloud user data
Yep, figured out the details: the shell PATH was not initialised when running under cron. The solution was to create a shell script and call it from cron, with .bash_profile sourced before the command. crontab: * * * * * bash /path/cron.sh cron.sh: . .bash_profile <cron command>
I have the command below set up for backing up: db:backup --database=mysql --destination=s3 --destinationPath=date +\test/%Y%m%d%H%M%S.sql --compression=gzip The code works fine when run as a command, but fails with local.ERROR: exception 'BackupManager\ShellProcessing\ShellProcessFailed' with message 'sh: mysqldump: command not found' when the same is run from the scheduler. Any suggestions?
Laravel command not working as cron
A similar thing is happening to me, except I don't think I've lost any data yet: the web interface is unresponsive and SSH is slow. I pulled out two of the drives after shutting down, and it came back up more responsive. I then updated DSM and ran SMART tests, all OK. I restarted with all drives back in; the last two took ages to come back, and the second drive was degraded and remains so. (The original poster later reported this didn't help, but Synology support were able to recreate the volume.)
I have also tried to contact Synology first, but haven't gotten a reply yet. So I was hoping some linux gurus might be able to help out here. I was upgrading some packages on my DS211J Synology drive. During that process the unit became unresponsive. I could not SSH into the drive, access the web interface or access the shared drives. I left it for some hours to finish working in case it was just using all its resources to do the upgrades. It did not return, so I tried rebooting by long pressing the power button. This did not help. So eventually I turned off the power and rebooted it that way. When it came back up it informed me that the volume had crashed. The disks are healthy. I did not see any option to repair the volume. When I SSH'ed into the unit the /volume1 folder was empty, except for an @eadir and @tmp folder. All my files gone. The two drives were configured in a mirror. I have turned off the unit now. But is there any way I can recover the files that were on /volume1? Why have they been deleted? I have not tried to recreate the volume. I have only tried to find the files that were missing. Ironically I was upgrading the packages so I could set up an off-site backup plan. Thanks
Synology volume crashed
It works for me, yes. When testing, be sure to call adb shell bmgr run first. After that I called fullbackup, deleted the app data (not necessary, but I wanted to be sure it works) and finally used restore, and all the data I configured in the manifest was back. Here is some more information: https://developer.android.com/training/backup/autosyncapi.html?utm_campaign=autobackup-729&utm_source=dac&utm_medium=blog
Has anyone been able to get Android Auto Backup to work? For testing I have done the following: Running adb shell bmgr fullbackup com.company.appname gives me Performing full transport backup Running adb shell bmgr restore com.company.appname gives me Unable to restore package com.company.appname done My original post is here Automatically backing up SQLite database
Has anyone been able to get Android Auto Backup to work?
You have created a query but not executed it; try running executeUpdate() on the created query: Session session = DatabaseUtil.getSessionFactory().openSession(); session.beginTransaction(); Query query = session.createSQLQuery("BACKUP TO '" + file.getCanonicalPath() + "'"); query.executeUpdate(); session.getTransaction().commit(); session.close();
I'm a little bit confused about how to perform an H2 database BACKUP and RESTORE. I have written some code using Hibernate and Java, but it is not working for now. So, how do I do a backup and restore while the database is being used by the application? File file = fileChooser.showSaveDialog(tbTabPaneHome.getScene().getWindow()); if (file != null) { // Save file try { Session session = DatabaseUtil.getSessionFactory().openSession(); session.beginTransaction(); session.createSQLQuery("BACKUP TO '" + file.getCanonicalPath() + "'"); session.getTransaction().commit(); session.close(); } catch (IOException e) { e.printStackTrace(); } }
Backup & Restore h2 database
I know this question is old, but I stumbled upon exactly the same question. Just adding the missing fields directly, as in: INSERT INTO frequencies_audit select *, "insert" as operation from frequencies where id = NEW.id worked fine for me. For more complex cases, you would have to join.
I found this question: MYSQL Trigger Update Copy Entire Row, where the suggestion to use the following code partially answers my own question about performing a row backup after altering a DB row: DROP TRIGGER auditlog CREATE TRIGGER auditlog AFTER UPDATE ON frequencies FOR EACH ROW BEGIN INSERT INTO frequencies_audit select * from frequencies where freqId = NEW.freqId; END; The problem is that I would like to insert additional information into the backed-up row, so I thought that adding variables could do the trick. My question is: is this the right procedure? INSERT INTO frequencies_audit select *, @variable, 'my_value' from frequencies where freqId = NEW.freqId;
Use mysql triggers to backup row after updating events
One simple way is to use the builtin command line tool tablediff.exe. It can compare two tables/views, and print out the differences. The tablediff utility is used to compare the data in two tables for non-convergence, and is particularly useful for troubleshooting non-convergence in a replication topology. This utility can be used from the command prompt or in a batch file to perform the following tasks: A row by row comparison between a source table in an instance of Microsoft SQL Server acting as a replication Publisher and the destination table at one or more instances of SQL Server acting as replication Subscribers. Perform a fast comparison by only comparing row counts and schema. Perform column-level comparisons. Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and destination tables into convergence. Log results to an output file or into a table in the destination database.
I have to recover a SQL 2008 R2 database for a POS system that broke down without proper backups in place. The .BAK file has been recovered, but was corrupted. However, I was able to retrieve most of the data and get it back into usable shape. My problem now is as following: I have database A, which is a fresh installation for the POS system, and database B, which is the recovered .BAK file. Most of the tables in B are missing their index values, while A has an intact structure, but is (obviously) lacking all the valuable data. How would I go about merging the two, so that I get a fully-indexed database with the correct structure?
Merge SQL databases keeping one index
OneDrive for Business has a download that will allow you to synchronize a directory locally: https://onedrive.live.com/about/en-us/download/ For a Linux platform, you should be able to use onedrive-d, found here: https://github.com/xybu/onedrive-d (a commenter notes the Linux project has since moved to https://github.com/xybu/onedrived-dev).
I've searched large and deep, but nothing is available, as far as I can see. TLDR: How can I use rsync with a SharePoint installation? (Or something like rsync) Long description We have a large install base of Macs (~50%), Windows (~40%), and Linux (~10%), so our environment is pretty heterogeneous. Being an experimental job we produce a considerable amount of experimental datasets that we need to share, and more importantly, backup. Right now we use external hard drives to store these files and folders, since our computers cannot hold these amount of data (50GB++, for instance, per dataset). And when we need to share, we "physically" share. We mainly we use rsync with some kind of backend (what kind is not important), but this solution requires computers to be left turned on, and act as servers. For reasons that I will not bother you with, we cannot leave a computer on after work. Having OneDrive for Business seemed a very promising technology to use, since we have more than 1TB per user. We could start syncing out datasets from our computers and hard drives, and we could share even when computers are turned off. We are aware that we may hit some drawbacks, as not being able to actually share, having some limits about the number of objects (files/directories), but we will handle them later. I prefer rsync, but right now we're open to any solution.
Using with rsync to MS SharePoint
Try helicopterizer, a backup and restore tool for Docker containers that works with cloud providers: https://github.com/frekele/helicopterizer
I'm testing Nexus 3 from a Docker container and I'm using https://github.com/sonatype/docker-nexus/blob/94d654faa2166b60fe2a4ad9629ff418a305dcb9/oss/Dockerfile. The issue is that when I upload an artifact to Nexus I can't find it in the file system in order to create a backup. The folder /sonatype-work is empty. I've successfully used this approach for backups of Nexus 2. Could you please advise me where Nexus 3 stores its artifacts.
Nexus 3 Docker container and Backup
Please try the latest version of SSMS, which you can download and install from the SQL Server Management Studio download page. Azure SQL Database has numerous new features being added at a fast pace, and SSMS from this unified download page provides up-to-date support for the latest features in Azure SQL Database.
Comment: "Thanks for the suggestion, I downloaded this and got the same error." – Paul R
I have a SQL Azure database which is approximately 10GB in total size. I wanted to have a local copy of the database for development, so I saved an export of the database to my storage account and downloaded it. I was a little suspicious when the backup size was only 500MB, but I backed up the database twice and the file size was the same both times. I am using SSMS 2014 against a SQL Server 2012 instance and selecting 'Import Data-tier Application'. The import appears to be working, BUT I get an error with the largest table. The error is:
TITLE: Microsoft SQL Server Management Studio
Data plan execution failed with message One or more errors occurred. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION: One or more errors occurred. (mscorlib) One or more errors occurred. One or more errors occurred. Unknown block type. Stream might be corrupted. (System)
I cannot find any examples of others with this problem, but it can't only be me that has it? FYI, when I try to use SSMS 2012 to import the database I get the following error:
TITLE: Microsoft SQL Server Management Studio
Could not load schema model from package. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION: Internal Error. The database platform service with type Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider is not valid. You must make sure the service is loaded, or you must provide the full type name of a valid database platform service. (Microsoft.Data.Tools.Schema.Sql)
Which is why I installed 2014.
UPDATE: After installing SSMS 2016 I got the same error:
TITLE: Microsoft SQL Server Management Studio
Data plan execution failed with message One or more errors occurred. (Microsoft.SqlServer.Dac)
ADDITIONAL INFORMATION: One or more errors occurred. (mscorlib) One or more errors occurred. One or more errors occurred. Unknown block type. Stream might be corrupted. (System)
SQL Azure Export and Import into local SQL Server 2012 - Unknown block type. Stream might be corrupted
I was also getting the same error when I tried to install the MARS Agent on a Windows Server 2016 machine. I tried many articles but had no luck. When I chose to install with Windows updates, it showed me an error for the Windows Update service, so I checked the Windows Update service and found it was disabled. I set its startup type to Automatic, started the service, and then tried again. The issue was resolved and no errors were found.
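For reference, the same service change can be made from an elevated command prompt; this is only a rough equivalent of the GUI steps described above, not part of the original answer:

sc config wuauserv start= auto
net start wuauserv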
I have an issue: I am trying to upgrade the MARS Agent (the Azure Recovery Services agent) on a Windows 7 PC. When I try to update it I get the error: "An unexpected error occurred during the installation. For more details check the setup error logs. (Error ID: 116)". When I press OK on that error, I am then presented with "Error starting the Microsoft Azure Recovery Services Agent Setup Wizard". Can you please help!
Azure MarsAgent Upgrade Failed
There is no direct support for Dropbox in Duplicati. Others have reported using a local folder under the Dropbox folder as the destination, so that the Dropbox client synchronizes the folder for you.
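With the command-line form shown in the question below, that would mean pointing the destination at a folder inside the local Dropbox directory rather than at a URL (the path and the file:// form here are assumptions; check the Duplicati 1.3 documentation for the exact backend syntax your version accepts):

Duplicati.CommandLine.exe backup a "file://C:\Users\<user>\Dropbox\Backups"

The Dropbox desktop client then uploads the resulting backup volumes in the background.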
I'm trying to back up folders from a local drive to Dropbox using the Duplicati command line (the backup should be incremental).
C:\Users\Desktop\Office_Works\Duplicati\Duplicati 1.3.4\Duplicati>Duplicati.CommandLine.exe backup a https://www.dropbox.com/
Enter passphrase: **
Confirm passphrase: **
Unable to find backend for: https://www.dropbox.com/
"a" is my folder on the local drive. Now I want to know how to make a connection with Dropbox using the command line. Is there any particular way to connect to Dropbox using Duplicati commands?
Command line for Dropbox connection in Duplicati backup
I do not know why --exclude does not work here, but I modified the find command and managed to get this working: FIND="`find $TOSAVE -mindepth 1 -type d \( -path $TOSAVE/backup \) -prune -o -mtime -1 -print`"
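One likely reason the --exclude never took effect (an assumption on my part, not something stated in the answer): OPTS in the script below is a plain string, so when rsync $OPTS ... is expanded unquoted, the single quotes in --exclude='backup/*' reach rsync as literal characters and the pattern never matches anything. A minimal sketch of the usual fix is to keep the options in a bash array; note that syncing the whole tree like this drops the "changed in the last day" behaviour, so it only illustrates the quoting point:

OPTS=(-aq --exclude='/backup')
rsync "${OPTS[@]}" "$TOSAVE/" "$BACKUPDIR"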
There are a lot of posts about this, I know, but I tried them all and still can't get it working. If this is my folder to back up: /home/user/thingstobackup the script will create a "backup" folder here and, inside it, another folder named after today's date. The daily backup is copied inside. No matter how I use rsync, the "backup" folder is always copied inside itself starting from the 2nd run of the script.
1st run:
/home/user/thingstobackup
/home/user/thingstobackup/backup/2016-01-13 # with the correct file inside
2nd run:
/home/user/thingstobackup/backup/2016-01-13 # with the correct file inside
/home/user/thingstobackup/backup/2016-01-14 # with the correct file inside
I will shorten the path here:
../backup/2016-01-14/2016-01-13/ with the backed-up file inside..
../backup/2016-01-14/backup/
../backup/2016-01-14/backup/2016-01-13/ with the backed-up file inside..
../backup/2016-01-14/backup/2016-01-14/ empty
After the 2nd run, the backup folder is copied inside every daily backup folder.
The script:
#!/bin/bash
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin
# directory to backup
TOSAVE=/home/user/thingstobackup
TODAY=`date +%F`
BDIR=backup
BACKUPDIR=$TOSAVE/$BDIR/$TODAY/
# options for rsync
OPTS="-aq --exclude='backup/*'"
# find daily new file
FIND="`find $TOSAVE -mindepth 1 -mtime -1 -print`"
# MAIN #
# copy daily found inside new created daily folder
[ -d $TOSAVE/$BDIR/$TODAY ] || mkdir -p $TOSAVE/$BDIR/$TODAY
rsync $OPTS $FIND $BACKUPDIR
# delete file older than 2 weeks = 14 days
find $TOSAVE -mtime +14 -exec rm -rf {} \;
No matter how I write it (--exclude='backup/*' || --exclude='backup' || --exclude 'backup/*' || --exclude 'backup'), it does not exclude that folder. Yes, I read the rsync manual: --exclude=PATTERN exclude files matching PATTERN. I'm sure I'm missing something but I just can't find it! Thanks in advance, mates.
rsync --exclude 'folder', copies that folder anyway
I assume you are asking about backing up your Windows server running as a VM instance on Azure. Below is the latest Azure VM backup guide from Microsoft. Hope it helps! Azure virtual machine backup
This is probably a basic question, but all I am looking for in Azure is the ability to back up files on my Windows Server at a scheduled time: MS SQL, MySQL, and web site files. I have created a trial account and a storage plan, but I need a pointer to which section of the dashboard I should be visiting - it isn't immediately apparent!
Basic Azure Storage getting started. Create Scheduled File back up
As far as I know, you can use Bareos for that. You can install the dummy package bareos via apt-get, and that will pull in the bareos-storage package with it, which should include support for NDMP.
Comment: "I tried to use Bareos many times, but it did not succeed. I got many errors, and after that I found out Bareos uses ndmjob for its NDMP function. I also tried to run ndmjob, but that did not succeed either." – Waveter
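The install the answer describes would look roughly like this (package names are taken from the answer; availability depends on your Debian release and on having the Bareos repository configured):

sudo apt-get update
sudo apt-get install bareos bareos-storage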
I'm looking for an open-source tool that implements the NDMP protocol on Debian, but I have not been successful yet. I tried to install and run the NDMP SDK (https://code.google.com/p/ndmp4linux/downloads/detail?name=ndmpkit.v3.2.tar.gz), ndmjob, opendmp, and ndmfs, but some of these tools cannot be installed on Debian (the NDMP SDK), and the others run with errors when I use the ndmpcopy tool to copy between two volumes. So, is there any open-source tool implementing the NDMP protocol that runs successfully on Debian?
NDMP Daemon for debian
In the MFC GUI go to File -> Preferences -> Debugging and set: 1. debug range: 1-200; 2. debug file name: debug.txt; 3. "Use these settings permanently". Relaunch the GUI, then run your session. Now check the current session's debug output in ProgramData -> OmniBack -> tmp.
I perform an incremental backup of my files on several servers every day. For the last two days the backup has taken much longer than usual:
Last Monday: start time 21:00:05, end time 23:35:34
Yesterday: start time 21:00:05, end time 08:40:31 (today)
There are no errors in the logs. In the log of this execution I do not see any information between 21:26:06 and 08:40:31:
[Normal] From: [email protected]_company.com "HP:Ultrium 3-SCSI_1_a01mbackup" Time: 21.09.2015 21:26:01 COMPLETED Media Agent "HP:Ultrium 3-SCSI_1_a01mbackup"
[Normal] From: [email protected]_company.com "File" Time: 22.09.2015 08:40:31
Backup Statistics:
Session Queuing Time (hours) 0,00
-------------------------------------------
Completed Disk Agents ........ 8
Failed Disk Agents ........... 0
Aborted Disk Agents .......... 0
-------------------------------------------
Disk Agents Total ............ 8
===========================================
Completed Media Agents ....... 1
Failed Media Agents .......... 0
Aborted Media Agents ......... 0
-------------------------------------------
Media Agents Total ........... 1
===========================================
Mbytes Total ................. 3894 MB
Used Media Total ............. 1
Disk Agent Errors Total ...... 0
How can I analyse this problem? Is there any more detailed information available?
HP Data Protector - very long backup duration
You likely found your answer by now through more research, but I had the same question and found the following answers useful: https://serverfault.com/a/128617/362650. This one is especially useful if you need to move your virtualized server back to a physical machine or need to copy/clone your virtual machine; it uses dd to create disk images. https://serverfault.com/a/324443/3626502. This additionally shows how to convert the disk image to a VirtualBox virtual disk image using qemu-img convert -O vdi. You could also use tools like Mondo Rescue or Clonezilla for making the images, and the VirtualBox command-line tool VBoxManage convertfromraw for converting the disk image to a virtual disk image: https://www.virtualbox.org/manual/ch08.html#idm4974.
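Put together, the approach from those answers looks roughly like this (the device name and file paths are placeholders; image the disk from a live/rescue environment so the filesystem is not in use while you copy it):

sudo dd if=/dev/sda of=/mnt/external/server.img bs=4M
qemu-img convert -O vdi /mnt/external/server.img server.vdi
# or, using the VirtualBox tool instead of qemu-img:
VBoxManage convertfromraw /mnt/external/server.img server.vdi --format VDI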
I have a server running Ubuntu Server LTS 14.04 and I need to make a complete copy of the operating system, including apps, files, databases, everything. I've read that the most straightforward way to do this is to make an OS image backup, but I didn't find a practical tutorial on how to do this and how to restore the image on any server or virtual server.
Ubuntu server extract O.S Image
To back up your database I suggest using mysqldump, which is safer than a simple file copy. For the volume backup you can also run a container, link the volume, and tar the contents together. In both cases you can use additional containers or process injection via docker exec.
Comment: "I would have to stop the web application while doing a backup, correct? What I'm wondering is if there is a way to do this without stopping the web application." – Sharan Nambiar
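A rough sketch of both suggestions (container names, database name, volume path, and the password are placeholders, not taken from the question below):

# dump the database from the running MySQL container
docker exec mysql_container mysqldump -u root -p'secret' --single-transaction mydb > mydb_$(date +%F).sql
# tar the uploads volume through a throwaway container, without stopping the app
docker run --rm --volumes-from uploads_data -v "$(pwd)":/backup busybox tar czf /backup/uploads_$(date +%F).tar.gz /data/uploads

The --single-transaction flag gives a consistent InnoDB dump without locking the tables, which also speaks to the comment about not wanting to stop the web application.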
I am in the process of building a simple web application using NodeJS that persists data to a MySQL database and saves images that have been uploaded to it. With my current setup, I have 4 Docker containers - 1 for the NodeJS application, 1 for the MySQL server, 1 Volume Container for the MySQL Data and 1 Volume container for the uploaded files. What I would like to do is come up with a mechanism where I can periodically take backups of both volume containers automatically without stopping the web application. Is it possible to do this and if so, what's the best way? I have looked at the Docker Documentation on Volume management that covers backing up and restoring volumes, but I'm not sure that would work while the application is still writing data to the database or saving uploaded files.
What's the correct way to take automatic backups of docker volume containers?
Just print what you do:
DECLARE @table NVARCHAR(MAX) = 'tab';
DECLARE @sql NVARCHAR(MAX) = 'select * into '+@table+'_' +'convert(date, getdate()) from '+@table;
SELECT @sql;
you will get:
select * into tab_convert(date, getdate()) from tab
You need to pass the date with the table name, like (SqlFiddleDemo):
DECLARE @table NVARCHAR(MAX) = 'tab';
DECLARE @new_table NVARCHAR(MAX) = @table + '_' + CONVERT(NVARCHAR(100), GETDATE(),105);
DECLARE @sql NVARCHAR(MAX) = 'select * into ' + @new_table + ' from '+ @table;
SELECT @sql;
/* Result
select * into tab_09-09-2015 from tab
*/
-- EXEC(@sql);
I am looking to back up a table and automatically add the date to the end of the table name. Here is what I have:
declare @table char(36)= 'template_fields'
EXEC('select * into '+@table+'_'+'convert(date, getdate()) from '+@table)
And I want the end result to look something like template_fields_09-09-2015. What am I missing here?
SQL Server EXEC backup table with date dynamically
You would need to use the latest snapshot (or the most recent snapshot taken before the truncate, if there have been additional snapshots after it). Cassandra uses hard links when creating snapshots, which is why the sizes of those snapshots are so different. To get the size of each individual snapshot you have to run a separate du command for each of them; that way you can see what the size was at the time of the snapshot. So rather than relying on du alone, list the directories by date (ls -ltr), pick the snapshot you want to restore, and then use the du command to verify it is the size you're expecting.
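For example, using the paths from the question below (the snapshot name here is just one of those listed; pick whichever one predates the truncate):

cd /raid0/cassandra/data/raw_data_keyspace/raw_buy_hits-d5e2fc5005f411e5bc39c93f22adf770/snapshots
ls -ltr                             # snapshots listed by date, oldest first
du -sh 1439296089074-raw_buy_hits   # size of this one snapshot on its own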
Data has accidentally been deleted from a production DB using a TRUNCATE command and I don't know how I should restore it. I've read about auto_snapshot [1] and fortunately this option is on. We have a bunch of snapshots (listed below) in the snapshots folder and we don't know which of them we should use to restore the data.
root@server:/raid0/cassandra/data/raw_data_keyspace/raw_buy_hits-d5e2fc5005f411e5bc39c93f22adf770/snapshots# du
44      ./1439296902349-raw_buy_hits
44      ./1439296723590-raw_buy_hits
48      ./1439296608175-raw_buy_hits
171964  ./1439296089074-raw_buy_hits
171032  ./1439203561681
44      ./1439296856042-raw_buy_hits
44      ./1439296234966-raw_buy_hits
343224  .
I didn't find any mention of this problem when I read Cassandra's docs. Which of the snapshots should we use? Should it be one of them, or should we use all of them in order to restore all of the data that we lost?
Which snapshot should I use to restore data after accidental truncate?
I suspect the credentials you are using for the database in your Connection Strings in the portal do not have sufficient rights to script out the database. The backup routine needs to script out your DB and extract all your data to work properly. Here is a helpful article on how to create a backup of Azure Web Apps and databases: https://azure.microsoft.com/en-us/documentation/articles/web-sites-backup/
I'm trying to get my website (file system & database) to back up in the Azure portal, but the backup never seems to complete. The steps I have taken are as follows:
1. I went to my website.
2. Clicked on backups.
3. I chose my storage account and also added a database backup.
4. I hit save.
I wanted to test that the backup was working, so I clicked "backup". It stated "Successfully started backup for web app '...'". I have left it for an hour and I cannot see any backups in my storage account. The site is quite small and so is the database. Any ideas what might be going wrong?
Side note: The same kind of thing happens in the new preview portal too... It just doesn't seem to finish the backup process. I followed this tutorial: https://www.mssqltips.com/sqlservertip/3057/windows-azure-sql-database-backup-and-restore-strategy/
Side note 2: I removed the database backup and this allowed the process to complete... I wonder if there is a bug in the backup process?
Website (file system & database) backups never seem to complete
0 As "That other guy" commented the solution here was to run the dos2unix convert command on the bash file: sudo dos2unix /etc/init.d/backup.sh and running it using bash command not sh command: sudo bash /etc/init.d/backup.sh Share Improve this answer Follow answered Jun 16, 2015 at 8:36 TTRTTR 1122 bronze badges Add a comment  | 
I followed this question: How to backup filesystem with tar using a bash script? But when I run the script it gives the following errors:
: not found/backup.sh: 2: /etc/init.d/backup.sh:
: not found/backup.sh: 5: /etc/init.d/backup.sh:
: not found/backup.sh: 7: /etc/init.d/backup.sh:
: not found/backup.sh: 10: /etc/init.d/backup.sh:
: not found/backup.sh: 12: /etc/init.d/backup.sh:
/etc/init.d/backup.sh: 13: /etc/init.d/backup.sh: Syntax error: "(" unexpected
Here's my script:
#!/bin/bash
#TODAY=$(date +%F)
#HOST=$(hostname)
mybackupname="backup-fullsys-$(date +%Y-%m-%d).tar.gz"
# Record start time by epoch second
start=$(date '+%s')
# List of excludes in a bash array, for easier reading.
excludes=(--exclude=/FILES/Media/Programs/Mint/Backup/$mybackupname)
excludes+=(--exclude=/proc)
excludes+=(--exclude=/lost+found)
excludes+=(--exclude=/sys)
excludes+=(--exclude=/mnt)
excludes+=(--exclude=/MEDIA)
excludes+=(--exclude=/BACKUP)
excludes+=(--exclude=/FILES)
if ! tar -czf "$mybackupname" "${excludes[@]}" /; then
    status="tar failed"
elif ! mv "$mybackupname" FILES/Media/Programs/Mint/Backup/ ; then
    status="mv failed"
else
    status="success: size=$(stat -c%s backups/filesystem/$mybackupname) duration=$((`date '+%s'` - $start))"
# Log to system log; handle this using syslog(8).
logger -t backup "$status"
Anyone see where I'm going wrong here?
Tar backup error
You can use the commands in this SO question to verify your local and remote are the same: just iterate through each branch and check that they match. In case your backup is just a backup of the bare repository, you can run a Unix diff command to verify that your backup is good.
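A rough sketch of that branch-by-branch check on the restored copy (paths and the remote name are assumptions, not taken from the answer):

cd /path/to/restored-clone
git fetch origin
for b in $(git for-each-ref --format='%(refname:short)' refs/heads); do
  [ "$(git rev-parse "$b")" = "$(git rev-parse "origin/$b")" ] || echo "MISMATCH: $b"
done
git fsck --full   # object-level integrity check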
We have disaster recovery plans that mean we take a backup of our git installation (Atlassian Stash in our case) and restore it on a test server to verify the backup was a good one. If the restore process fails then we have a problem, but we're wondering about going a bit further when the restore is a success and verifying the restored repositories. Would using git fsck be a good idea here? Running it locally as a developer throws up some dangling or unreachable objects; I believe this is a normal thing that happens. But on a fresh git clone there shouldn't be any issues, right? So if fsck reports errors then we're having a bad time? As a second option, we could also point our CI server at a restored repository and have it build and run tests. As our main branch should always be healthy, any build failures would indicate an issue. Any other ideas on verifying a repository is good and healthy?
Verifying a git repository (Stash) restore
function backup_Eventtable()
{
    $this->load->dbutil();
    $prefs = array(
        //'tables'   => array('hotel_accomodation'), // Array of tables to backup.
        'ignore'     => array(),            // List of tables to omit from the backup
        'format'     => 'txt',              // gzip, zip, txt
        'filename'   => ''.date("Y-m-d-H-i-s").'-mybackup.sql', // File name - NEEDED ONLY WITH ZIP FILES
        'add_drop'   => TRUE,               // Whether to add DROP TABLE statements to backup file
        'add_insert' => TRUE,               // Whether to add INSERT data to backup file
        'newline'    => "\n"                // Newline character used in backup file
    );
    $backup =& $this->dbutil->backup($prefs);
    // I tried it with and without the next 2 lines.
    $this->load->helper('file');
    write_file('backup_db/'.date("Y-m-d-H-i-s").'-mybackup.sql', $backup);
}
Try this code; it may be useful.
I am using the CodeIgniter database backup utility class, but it is giving me an error of:
Unsupported feature of the database platform you are using. Filename: E:\xampp\htdocs\zafarsir\system\database\drivers\mysqli\mysqli_utility.php Line Number: 82
Here is my code:
$this->load->dbutil();
// Backup your entire database and assign it to a variable
$backup =& $this->dbutil->backup();
// Load the file helper and write the file to your server
$this->load->helper('file');
write_file('http://localhost/zafarsir/database/mybackup.gz', $backup);
// Load the download helper and send the file to your desktop
$this->load->helper('download');
force_download('mybackup.gz', $backup);
$prefs = array(
    'tables'     => array('user', 'party'), // Array of tables to backup.
    'ignore'     => array(),                // List of tables to omit from the backup
    'format'     => 'txt',                  // gzip, zip, txt
    'filename'   => 'mybackup.sql',         // File name - NEEDED ONLY WITH ZIP FILES
    'add_drop'   => TRUE,                   // Whether to add DROP TABLE statements to backup file
    'add_insert' => TRUE,                   // Whether to add INSERT data to backup file
    'newline'    => "\n"                    // Newline character used in backup file
);
$this->dbutil->backup($prefs);
It is giving me the error on this line: $backup =& $this->dbutil->backup(); I am new to CodeIgniter, please help me.
CodeIgniter Backup Database is not working
See 127 Return code from $?. Check if busybox is installed.
I need to create a customized flashable ROM. By customized I mean I will add some apps to it and redistribute this new ROM. Now, first of all, I've read that I could create a flashable ROM from a Nandroid backup, but somehow when I install ClockworkMod and execute a backup it returns just this:
Starting backup...
Running with the following commands : -o --utc --storage /sdcard -pd -r
Using default shell
exitcode[127]
What's happening? Actually this is not a smartphone but an Android head unit ("Full AOSP on Mstar Cedric3"). I can put a zip file on it to upgrade the firmware; instead, the official readme suggests that I should extract all the zip contents onto a micro SD card and the device will update the contents one by one, like first CIS, then boot, recovery, system, data, etc. Any help?
Nandroid backup failed exitcode[127]
You can count them ahead of time to make the decisions. Here's one way to do it. Unfortunately FORFILES doesn't allow you to easily find files newer than 4 days ago, so we find the older ones and subtract from the total. Loop through the files and add the ones that match your deletion criteria to a text file:
forfiles /p Z:\SQL /s /m *.bak /d -4 /c "CMD /c ECHO @fname" > BAKfiles.tmp
forfiles /p Z:\SQL /s /m *.trn /d -4 /c "CMD /c ECHO @fname" > TRNfiles.tmp
Next, use the FIND /c command to count the lines that have quotation marks in them, and use FOR /F to parse the total from the output after the colon. You need to do this for each file type, but I'm only showing one for the general form.
FOR /F %%C IN ('dir *.BAK^| find "File(s)"') do SET BAKTOTAL=%%C
FOR /F %%C IN ('dir *.TRN^| find "File(s)"') do SET TRNTOTAL=%%C
FOR /F "tokens=2* delims=:" %%C IN ('cmd /c find /c """" BAKfiles.tmp') DO SET BAKDEL=%%C
FOR /F "tokens=2* delims=:" %%C IN ('cmd /c find /c """" TRNfiles.tmp') DO SET TRNDEL=%%C
SET /A BAKCOUNT=%BAKTOTAL% - %BAKDEL%
SET /A TRNCOUNT=%TRNTOTAL% - %TRNDEL%
Then evaluate the count with IF to see if you have enough files to meet your threshold, and run your deletion commands.
IF %BAKCOUNT% GEQ 3 IF %TRNCOUNT% GEQ 3 GOTO DELETEFILES
ECHO Not enough files were found. Skipping deletion.
GOTO ENDING
:DELETEFILES
forfiles /p Z:\SQL /s /m *.bak /d -4 /c "cmd /c del /q @path"
forfiles /p Z:\SQL /s /m *.trn /d -4 /c "cmd /c del /q @path"
:ENDING
del /q BAKfiles.tmp
del /q TRNfiles.tmp
This is my first post & take in mind my native language isn't English. .bat file so far: C: Cd\ forfiles /p Z:\SQL /s /m *.bak /d -4 /c "cmd /c del /q @path" forfiles /p Z:\SQL /s /m *.trn /d -4 /c "cmd /c del /q @path" What I want with this file: I want it to delete 4 days old backup & log files. This .Bat file works with that - the risk comes now - if the backup/log program for some reason stop and the person in charge is sick for 4 days - the .bat file will delete the only valid backup I have. So what I ask for: Is there someone out there that knows a way to switch in the .bat file so that it looks for at least 3 days worth of files - NOTE Not just 3 files, but it has to be from 3 previous days in a row. Example: The system takes backup Monday (1), system crashes Thursday (2). When the scheduler runs the backup at Tuesday (4) it will just delete the Monday backup (which is my only valid one) because its 4 days old. That's where it comes in that I want it to check: Do I have any other .bak/.trn files the last 3 days Yes/No? No = Don't delete. Yes = Delete everything over 4 days old. Hope I have explained myself somewhat understandably. Thanks for reading, hope you can help me out!
.bat file - deleting 4-day-old backups only if files from the last 3 days already exist
Start by creating a dictionary where each key is a volume id:
import dateutil.parser as p

d = {}
for snapshot in snapshots:
    snapshot["created_at"] = p.parse(snapshot["created_at"])
    try:
        d[snapshot["volume_id"]].append(snapshot)
    except KeyError:
        d[snapshot["volume_id"]] = [snapshot]
Now you should be able to work with it much more easily:
from operator import itemgetter

d2 = {}
for volume, data_list in d.items():
    d2[volume] = sorted(data_list, key=itemgetter("created_at"), reverse=True)[:5]
d2 should now contain only the 5 most recent snapshots for any given volume.
I have the code below, which looks for 2 keys and deletes the older one. That means I only keep one retained snapshot per volume:
def delete_old_snap(self, volumeid):
    list_snap = self.snapshots()

    def doubles(l):
        keys = [i["volume_id"] for i in l if i["volume_id"] == volumeid]
        keys = {k for k in keys if keys.count(k) > 1}
        return zip([[d for d in l if d["volume_id"] == k] for k in keys])

    for t in doubles(list_snap):
        snap_id_to_delete = t[0][0]['id'] if ( t[0][0]['created_at'] < t[0][1]['created_at'] ) else t[0][1]['id']
My goal is to allow, for example, 5 retained snapshots, like:
def delete_old_snap(self, volumeid, retention=5):
    list_snap = self.snapshots()
    # keep `retention` keys (based on ['created_at'])
    # loop for deleting the old ones if found
One sample of the data:
[
 {u'status': u'available', u'os-extended-snapshot-attributes:progress': u'100%', u'description': u'Daily snapshot', u'name': u'snap-DAILY-WEB-OCS_HOME', u'created_at': u'2015-01-22T14:09:30.000000', u'id': u'02ee7feb-6919-4732-9eb3-8c6f721dc426', u'volume_id': u'edcaac08-5f6a-4bf7-906c-d6ed9cb20b22', u'size': 2, u'os-extended-snapshot-attributes:project_id': u'a0998a6710f84dc78550393119b41721', u'metadata': {}},
 ....]
keep n retention and delete old one
Your first example is correct.
public class MyBackupAgentHelper extends BackupAgentHelper {
    static final String DEFAULT_PREFS = "packagename_preferences";
    static final String OTHER_PREFS = "packagename_other_preference";
    // A key to uniquely identify the set of backup data
    static final String PREFS_BACKUP_KEY = "prefs";

    @Override
    public void onCreate() {
        SharedPreferencesBackupHelper helper1 = new SharedPreferencesBackupHelper(this, DEFAULT_PREFS);
        SharedPreferencesBackupHelper helper2 = new SharedPreferencesBackupHelper(this, OTHER_PREFS);
        addHelper(PREFS_BACKUP_KEY, helper1);
        addHelper(PREFS_BACKUP_KEY, helper2);
    }
}
I have trouble understanding the keyPrefix of the addHelper() method. Does it need to be unique for each BackupAgentHelper class instance, or for each SharedPreferencesBackupHelper? I want to back up two or more sets of SharedPreferences. Example:
public class PrefsBackupAgent extends BackupAgentHelper {
    // Allocate a helper and add it to the backup agent
    @Override
    public void onCreate() {
        SharedPreferencesBackupHelper user1 = new SharedPreferencesBackupHelper(this, "user1_preferences");
        addHelper("prefs", user1); // <-- keyPrefix same for both addHelper calls?
        SharedPreferencesBackupHelper user2 = new SharedPreferencesBackupHelper(this, "user2_preferences");
        addHelper("prefs", user2); // <--
    }
}
or does it need to look like this:
public class PrefsBackupAgent extends BackupAgentHelper {
    // Allocate a helper and add it to the backup agent
    @Override
    public void onCreate() {
        SharedPreferencesBackupHelper user1 = new SharedPreferencesBackupHelper(this, "user1_preferences");
        addHelper("user1", user1); // <-- or do they need to be unique for each SharedPreferencesBackupHelper?
        SharedPreferencesBackupHelper user2 = new SharedPreferencesBackupHelper(this, "user2_preferences");
        addHelper("user2", user2); // <--
    }
}
Which one is the correct way? Thank you!
BackupAgentHelper with multiple sets of SharedPreferences