Response (string, lengths 8–2k) | Instruction (string, lengths 18–2k) | Prompt (string, lengths 14–160)
---|---|---|
You can use the PowerShell cmdlet below to delete the backup item, as described in the Microsoft documentation here:
Disable-AzRecoveryServicesBackupProtection -Item $myBkpItem -RemoveRecoveryPoints -VaultId $myVaultID -Force
You can also delete the backup data of a VM using the Azure portal by following this document.
Will changing the retention policy delete old backups?
Refer to this to learn about modifying the retention policies.
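For context, a minimal sketch of how the $myBkpItem and $myVaultID values above are usually resolved; the resource group, vault and VM names are placeholders, not from the question:
# placeholders: "myRG", "myVault", "myVM"
$myVault = Get-AzRecoveryServicesVault -ResourceGroupName "myRG" -Name "myVault"
$myContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "myVM" -VaultId $myVault.ID
$myBkpItem = Get-AzRecoveryServicesBackupItem -Container $myContainer -WorkloadType AzureVM -VaultId $myVault.ID
Disable-AzRecoveryServicesBackupProtection -Item $myBkpItem -RemoveRecoveryPoints -VaultId $myVault.ID -Force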
|
|
I am new to Azure and the Azure Python SDK, and I have some questions.
How do I delete old backups of a VM using Python SDK? I have looked it up on Azure CLI documentation but have not found the delete command.
I am thinking of modifying the retention policy within the policy associated with the backups. Will changing the retention policy delete old backups?
|
How to delete old backup jobs on Azure
|
TL;DR: There are no hard rules per se; it depends on your use case.
But you can look at the default settings on Elastic managed services:
the minimum number of snapshots must be at least 12, and the maximum limit is 100.
But as you mentioned, if you have big master nodes you could keep more.
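For reference, snapshot retention is usually expressed in an SLM policy; a minimal sketch, assuming a repository named my_repository already exists (the policy name and schedule are placeholders):
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "retention": {
    "expire_after": "30d",
    "min_count": 12,
    "max_count": 100
  }
}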
I think these numbers refer to versions 7.5 and below.
– Amir M
Jun 30, 2022 at 6:37
Agreed, I don't think there are recommendations at the moment for versions above 7.5. So to be on the safe side I would follow the previous recommendation.
– Paulo
Jun 30, 2022 at 9:07
|
|
The Elasticsearch documentation states:
A snapshot repository can safely scale to thousands of snapshots. However, to manage its metadata, a large repository requires more memory on the master node. Retention rules ensure a repository’s metadata doesn’t grow to a size that could destabilize the master node.
snapshot-retention-limits
What is a "safe" number of snapshots that will not destabilize the master node?
I'm using version 8.2 and need to save between 1000-3000 snapshots in my repository, is this safe?
|
What is Elasticsearch snapshot retention rules recommended max_count?
|
You can use mysqladmin to start/stop the replication like this:
# mysqladmin -u root -p start-slave
# mysqladmin -u root -p stop-slave
will it work from the mysql shell in python mode?
– Jules
Jun 26, 2022 at 16:42
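Regarding the Python-mode question above: inside MySQL Shell's \py mode you can also issue the statements directly through the global session object. A minimal sketch, assuming a classic session to the replica and the 8.0.23-era STOP/START SLAVE syntax:
# run before util.dump_instance(...)
session.run_sql("STOP SLAVE SQL_THREAD")
# ... run the dump ...
# and afterwards
session.run_sql("START SLAVE SQL_THREAD")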
|
|
I am using MySQL 8.0.23 docker container to back up an instance using util.dump_instance() from a time delayed replica instance using MySQL shell in Python mode.
How can I stop the replication before I run the instance dump?
I could find no examples in the documentation.
I would run the equivalent of the following statement:
mysql -h"$MYSQL_HOST" -P"$MYSQL_PORT" -u$MYSQL_USER -p"$MYSQL_PASS" -e 'STOP SLAVE SQL_THREAD;'
so starting the replication would be easier on the instance since the binary log will contain the entries to be replicated.
Current code:
import os
import time
TIME = time.strftime("%Y%m%d-%H%M")
MYSQL_HOST = os.getenv('MYSQL_HOST')
MYSQL_PORT = os.getenv('MYSQL_PORT')
BACKUP_PATH = "/backup/"+MYSQL_HOST+":"+MYSQL_PORT+"//"+TIME
util.dump_instance( BACKUP_PATH, {'dryRun': False, 'threads': 4, 'showProgress': True, 'consistent': True})
|
How do you stop MySQL replication using mysql shell 8.0.23?
|
Although I cannot test this myself, I think you could do that using Invoke-Command like below:
$servers = "SRV1", "SRV2", "SRV3"
# set the credentials for admin access on the servers
$cred = Get-Credential 'Please enter your admin credentials'
$result = Invoke-Command -ComputerName $servers -Credential $cred -ScriptBlock {
$date = (Get-Date).AddHours(-24).Date
$sessions = Get-VBRComputerBackupJobSession
foreach ($PBackup_job in (Get-VBRComputerBackupJob)) {
$sessions | Where-Object {$_.CreationTime -ge $date} |
Sort-Object CreationTime |
Select-Object @{Name = 'Server'; Expression = {$env:COMPUTERNAME}},
@{Name = 'BackupJobName'; Expression = {$PBackup_job.Name}},
CreationTime, endtime, result, state
}
}
# remove the extra properties PowerShell added and save to CSV
$result = $result | Select-Object * -ExcludeProperty PS*, RunSpaceId
# output on screen
$result | Format-Table -AutoSize
# write to file
$result | Export-Csv -Path 'X:\Somewhere\BackupJobs.csv' -NoTypeInformation
When I execute the code I get a credentials popup along with the error: "Connecting to remote server SRV failed with the following error message: The WinRM client cannot process the request."
– StackBuck
Jun 21, 2022 at 8:01
@StackBuck Enable WinRM
– Theo
Jun 21, 2022 at 14:22
I can't, because it makes the servers vulnerable to attacks.
– StackBuck
Jun 26, 2022 at 10:20
|
|
I have a script that runs locally on a server every 24 hours and checks the status of all backup jobs along with more details.
I want that script to check all my servers, let's say: "SRV1", "SRV2", "SRV3".
How can I manage that?
Here's the script:
$date = (Get-Date).AddHours(-24)
$sessions = Get-VBRComputerBackupJobSession
foreach ($PBackup_job in (Get-VBRComputerBackupJob | Select Name)) {
$PBackup_job_name = $PBackup_job.Name
write "------------ Physical Server Backup Job Name : $PBackup_job_name ------------"
$sessions | where {$_.CreationTime -ge $date} | sort CreationTime | Select CreationTime, endtime, result, state | Format-Table
}
|
How to check Veeam Backup Jobs Status in ALL servers
|
The simplest solution would be to use GitHub or Bitbucket and to regularly push the changes you make to the online repository. You will benefit more from using version control software than from a local backup. You can use either of them for free.
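A minimal flow along those lines, assuming an empty remote repository has already been created (the remote URL is a placeholder):
git init
git add -A
git commit -m "work in progress"
git remote add origin git@github.com:yourname/backup.git
git push -u origin HEAD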
Believe it or not, I actually deleted everything playing with git commands... I want something completely different, if it exists: something automatic upon file save.
– Shifras
Jun 16, 2022 at 21:35
OK, I am not sure, but cwatch may be a Linux solution to my question.
– Shifras
Jun 16, 2022 at 21:48
|
|
I have just accidentally deleted one week of coding source files, and even testdisk does not restore them. Even the executable jars are gone... I use Ubuntu. I don't want that to ever happen again. How can I sufficiently and efficiently make automatic backups (clones) of selected critical files to a different location, e.g. home?
I use Java, and Eclipse as IDE, but this could be any file I work with. E.g. I select a certain file because I could accidentally delete it, so this lightweight backup tool would automatically update it in the saved backup location according to the saved changes. So if it is lost in the working directory, as in my case, I can just take it from the backup location on the local machine. Please help. I feel devastated...
cwatch might be the solution I am looking for, but it is too complicated.
P.S. I am aware of the question "Script to perform a local backup of files stored in Google drive";
Google services are not OK for me.
|
How to automatically-selectively backup critical files on edit?
|
Finally I was able to fix this and write my solution, and it worked.
##################### Updating Firewall rules from Source DB server to Target DB server ##################
Write-Host -NoNewline "Updating Firewall rules from Source DB server to Target DB server"
Get-AzMySqlFirewallRule -ResourceGroupName $ResourceGroupName -ServerName $SourceDBServerName | Select-Object Name, StartIPaddress, EndIPaddress | Convertto-Json | Out-File "file.json"
foreach ($entry in (Get-Content file.json -raw | ConvertFrom-Json)) {
New-AzMySqlFirewallRule -Name $entry.Name -ResourceGroupName $ResourceGroupName -ServerName $TargetDBServerName -EndIPAddress $entry.EndIPAddress -StartIPAddress $entry.StartIPAddress
}
|
I have restored an Azure Database for MySQL single server using the PowerShell script below.
After restoring the DB, I had to manually copy all the firewall rules and other settings from the connection security section of the source Azure Database for MySQL single server.
I would like to automate copying the connection security configuration from the source server to the restored server using a PowerShell script, but I couldn't figure out how to automate this.
####################### Restore DB Server #######################
Write-Host "Restoring Azure DB for my SQL Server"
$restorePointInTime = (Get-Date).AddMinutes(-5)
$DBServerbackupStatus = Get-AzMySqlServer -Name $SourceDBServerName -ResourceGroupName $ResourceGroupName | Restore-AzMySqlServer -Name $TargetDBServerName -ResourceGroupName $ResourceGroupName -RestorePointInTime $restorePointInTime -UsePointInTimeRestore
Start-Sleep -s 60
Write-Host -NoNewline "DBServer Restore process is completed, please find the current status below"
$DBServerbackupStatus
|
How to copy connection security (firewall rules, SSL, TLS) from one Azure DB for MySQL server to another using PowerShell
|
A "snapshot" type of backup would only be necessary if you are running in NOARCHIVELOG mode, and you'd have to shutdown the entire database to do it as a "cold" backup (you can't get a logically consistent backup without transaction logs while the database is open for read/write activity). This would presumably impact your end-of-day process.
Assuming that the database is in ARCHIVELOG mode, and that you can run your backup as a "hot" backup while the database is up and running, you do not need to worry about the timing of your backup at all.
Run a backup whenever it makes sense based on system load or activity (being sure to backup the archive logs too), and if you need to recover from a backup later then recover to the exact point in time that you need - before or after your end-of-day process. See the documentation for Point in Time Recovery options: https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/rman-performing-flashback-dbpitr.html
The restore and recovery operation will restore from the backup and then re-apply all transactions to bring the database back to the desired point in time. The only thing the timing of the backup job would affect would be the number of transactions that might need to be re-applied after the data files are restored.
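As a concrete illustration of the above, a minimal RMAN sketch; the timestamp is a placeholder built from the 10:00 p.m. example in the question:
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
and later, to restore and recover to just before the EOD run:
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2022-04-30 22:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;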
|
|
I need to take two backups, before and after the day-end process. If the EOD process starts at 10:00 p.m., the backup should contain all the data right at 10:00 p.m. before starting the EOD, and the backup process should not impact the EOD process either. Is there a way to achieve this?
Please note that I need to write the RMAN backups to disk and then to tape.
|
Two Oracle RMAN Backup between an EOB
|
No way.
There is no way to get a count of the tablespace pages changed since the last full or incremental backup.
There is only a flag (the TrackmodeState column in the db2pd -db mydb -tablesp output) available for the whole tablespace.
|
|
Db2 11.5.6.0 on AIX 7.1
Is there any way to guess size of full and incremental backup images?
|
Db2 11.5.6.0 on AIX 7.1 - How to guess size of full and incremental backup images?
|
If your current system is not critical, put it into NOARCHIVELOG mode:
$ sqlplus / as sysdba
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;
==============================================================
But if the DB is important, then:
rman> backup archivelog all delete input;
And once your archive log backup is complete, then:
rman> delete noprompt obsolete;
P.S.: Backup configuration is a broad topic; this suggestion is only a temporary fix!
|
I would like to know what these files are and how to clear them.
They've been using up quite a bit of my disk space.
ARC file
I have tried using RMAN to clear them, but I get this error:
RMAN
Thanks in advance!
|
Oracle ARCXXXX.0001 files and how to clear them
|
Consider a simple example:
wget -r http://www.example.com
creates the directory www.example.com and places index.html inside it, while
wget -r -nH http://www.example.com
just places index.html in the current working directory; that is, you have one level less (the top one) of directory hierarchy when using -nH.
Put simply, with -nH wget will not create a directory for each domain.
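The startup-file form of the same switch goes in ~/.wgetrc; a minimal sketch:
# in ~/.wgetrc, this has the same effect as passing -nH on the command line
add_hostdir = off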
It's clear what -nH does. My question is, what does "add-hostdir" do?
– R B
Apr 6, 2022 at 15:33
@RB add-hostdir makes the domain the first element of the path to each downloaded item.
– Daweo
Apr 7, 2022 at 7:31
|
|
The wget(1) man pages describe the option "add-hostdir" as follows:
Enable/disable host-prefixed file names. ‘-nH’ disables it.
Unfortunately, I am too ignorant to understand this.
Since "-nH" disables it, it must have something to do with spanning hosts, but it isn't described there either.
Can someone explain what it does?
I am using wget 1.19.1
|
What is the purpose of the wget startup file option "add-hostdir"?
|
Thanks guys!
The program gets executed on the first of every month and I managed it like this; it worked for me!
$tage = (Get-Date).AddDays(-1).Day + 1 # number of days in the previous month, plus one
$dirbackup = $dirname + "BP"
get-childitem -Path "C:\Users\kaar\Desktop\Ordneralt\$dirname\ARCHIV" |
where-object {$_.LastWriteTime -lt (get-date).AddDays(-$tage)} |
move-item -destination "C:\Users\kaar\Desktop\Ordnerneu\Temp"
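An alternative that avoids counting days entirely is to compute the first day of the previous month and compare against that. A minimal sketch, reusing $dirname and the placeholder paths from the question:
# first day of the previous month; run on the 1st, this keeps exactly the last month's files
$cutoff = (Get-Date -Day 1).Date.AddMonths(-1)
Get-ChildItem -Path "C:\Users\kaar\Desktop\Ordneralt\$dirname\ARCHIV" |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Move-Item -Destination "C:\Users\kaar\Desktop\Ordnerneu\Temp"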
|
I have never worked with PowerShell, and I only need a program that always runs at the beginning of the month.
What should the program do?
At the beginning of the month it should move all data except the last month's to another folder and zip it there to save storage space.
My question: I use (-31 days), but not every month has 31 days. How could I solve this, or does it fit like that?
I'm sorry if I explained something wrong here, please let me know.
foreach ($kunde in (Get-ChildItem "C:\Users\kaar\Desktop\Ordneralt" -Exclude *.pdf , *.jpeg, *.png, *.gif ))
{
$dirname = $kunde.name
for ($i=0; $i -lt $dirname.length; $i++)
{
$dirbackup = $dirname + "BP"
get-childitem -Path "C:\Users\kaar\Desktop\Ordneralt\$dirname\ARCHIV" |
where-object {$_.LastWriteTime -lt (get-date).AddDays(-31)} |
move-item -destination "C:\Users\kaar\Desktop\Ordnerneu\$dirbackup"
}
$dirdate = get-date -format 'yyyy-MM-dd'
$backupname = $dirdate + "__" + $dirname
Compress-Archive -Path "C:\Users\kaar\Desktop\Ordnerneu\$dirbackup" -DestinationPath "C:\Users\kaar\Desktop\Ordnerneu\$dirbackup\$backupname"
Remove-Item "C:\Users\kaar\Desktop\Ordnerneu\$dirbackup" -Recurse -Include *.pdf -force
}
I was searching for ideas but didn't find anything.
|
Move Files to a Backup Folder every first in the month and Zip the Files
|
To Disable SQL Server Managed Backup to Microsoft Azure for a specific database:
Connect to the Database Engine.
From the Standard bar, click New Query.
Copy and paste the following example into the query window and click Execute.
EXEC msdb.managed_backup.sp_backup_config_basic
@database_name = 'TestDB'
,@enable_backup = 0;
GO
I have run the two queries below to check the existing settings. The first query returns no records and the second returns only one record, with a value of '0' in the first column, "is_managed_backup_enabled". So it looks like Managed Backups are not enabled.
SELECT * FROM msdb.managed_backup.fn_backup_db_config (NULL)
SELECT * FROM msdb.managed_backup.fn_backup_instance_config ()
– miamikk
Mar 25, 2022 at 0:09
|
|
We have a 2019 SQL Server VM in Azure with both Azure Server backups and Azure SQL Database backups configured.
Azure SQL Database backups are configured as FULL backups on Sunday and DIFF backups the other 6 days but I noticed there are frequent (7 times a day) copy-only full backups running. These copy-only full backups only take 10-15 seconds but when they run, the I/O gets frozen and it's impacting the SQL performance during business hours.
I understand that as part of Azure SQL Server VM backup, it will trigger a copy-only full database backup (and we got VM Backups running at 10:00 pm, so the copy-only full backup at around 10:15 pm is related to this), but not sure what process is taking the other 6 copy-only full database backups (they run every 4 hours at 1:30 AM, 5:30 AM, 9:30AM, 1:30PM, 5:30PM, 9:30PM). Any ideas on where to look for in Azure configuration?
I have attached a screenshot of the backup history for one of the databases on the SQL Server. A normal FULL backup which runs on Sunday at 7:00 PM takes about 200 mins and DIFF backups which run the other 6 days at 7:00 pm take about 8-10 mins.
I would like to know what could be triggering these copy-only FULL backups so we can disable them. We don't have any other database backups configured (no SQL Agent jobs or 3rd-party tools like NetBackup, Veeam, CommVault etc.).
|
Frequent Copy-only full database backups running on SQL Server VM in Azure
|
You can use automatic backups combined with manual backups for an Aurora PostgreSQL database. For automatic backups, the maximum retention period is 35 days, and they support restore and recovery to any point in time. However, if you need a backup beyond the retention period (35 days), you can also take a snapshot of the data in your cluster volume.
If you use third-party tools such as Veeam, they will also invoke the AWS RDS snapshot API to take the backup, so the underlying mechanism is the same.
You can also use the pg_dump utility for backing up the RDS for PostgreSQL database, and run pg_dump against a read replica to minimize the performance impact on the primary database.
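For example, a minimal pg_dump invocation against a reader endpoint; the endpoint, user and database names are placeholders:
# custom-format dump taken from a read replica / reader endpoint
pg_dump -h mycluster.cluster-ro-abc123.eu-west-1.rds.amazonaws.com -U myuser -Fc -f mydb.dump mydb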
|
|
I would like to backup every single PostgreSql database of my AWS RDS Cluster (Aurora DB Engine). Are there some managed tools (like Veeam or N2WS) or best practices, how to backup and restore a single database or schema from AWS S3?
Many thanks
|
Backup and Restore AWS RDS Aurora cluster
|
Recreate the dump if you can, using the --skip-triggers option. You will have to recreate the triggers afterwards.
https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_triggers
There is no option available to disable the triggers on import.
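A minimal sketch of such a re-dump; the credentials and output file name are placeholders:
# dump everything except trigger definitions
mysqldump -u root -p --all-databases --skip-triggers | gzip > all-databases-no-triggers.sql.gz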
|
I have a large dump (~50 GB) of a MySQL database (a multiple-database backup). I am restoring it to a new server, which is taking a long time; I think it is taking this long because it is a huge amount of data. The command (gunzip < 1922-1648-329-75_all-databases.sql.20220305-0000.gz | mysql -u test -p) also works fine and it started importing. But after some time, I get an error: "Unknown column 'id' in 'OLD'". I have troubleshot it and found that this error comes from one of the triggers in the backup file.
I don't really need the triggers on the new server. Is there a command-line option in MySQL that will allow me to skip the triggers while restoring the dump?
Below is the restore command I am using:
gunzip < 1922-1648-329-75_all-databases.sql.20220305-0000.gz | mysql -u test -p
|
Skip triggers while restoring MySQL dump using Linux command
|
Go to Google Drive and check the folder named UpdraftPlus; you can see all the backup files there. In the WordPress dashboard, go to the UpdraftPlus Backup section to see the list of all backups.
Thanks for your prompt reply. The folder name in the settings is UpdraftPlus. I'm using the free version, and in the free version this is the default Google Drive folder name; the free version does not allow changing the folder name. In the screenshot below you can also see that I'm authenticated. Screenshot: prnt.sc/_iDvKKMqmMTm I can see all the backups created as per my desired schedule, but the backups are not going to Google Drive.
– Sajid Javed
Mar 9, 2022 at 7:49
You need to authenticate with Google. Please click "Sign in with Google" and provide credentials. After authenticating, it shows your name like in the screenshot. Screenshot link: prnt.sc/f81QlBbR4abp
– Monzur Alam
Mar 9, 2022 at 11:09
|
|
I used the UpdraftPlus plugin for creating and storing a backup of everything in Google Drive. I have already linked my Google account with the plugin.
Inside WordPress, I can see that UpdraftPlus is working perfectly. I can see some backups already created as per my desired schedule, but I can't locate these backups in my Google Drive.
The logs generated by the plugin for the last backup are here: https://www.codepile.net/pile/VmrK6Oy7
|
UpdraftPlus WordPress plugin is not working to store Backup in the Google Drive
|
The problems with encrypting are:
1 - You need to decrypt in order to use the data in your application, which will have a performance impact.
2 - If you can decrypt it, so can someone who has obtained the data illegally, so it's not really that secure.
If you did want to go down this route, you could read the data from the DB, decrypt it and then cache it, which will speed up the application.
The cache could last for several days, given that you said the data only gets updated every few weeks/months, and this would minimise DB calls.
You added this as an answer but it should have been part of your question. Please remove the answer and put it into your question! :)
– S. ten Brinke
Jan 26, 2022 at 20:08
@S.tenBrinke - this is my answer to the question, hence why I added it as an answer.
– Ben D
Jan 27, 2022 at 11:35
|
|
I couldn't find a proper topic so I'm creating this question.
I'm building a desktop application that runs on data from an MS SQL database. Most of the data there is updated once a week/month, and most of the tables are read-only for the end user. I figured there is no need for the user to work directly on the SQL database online, so in order to speed up performance I want the app to download the necessary data from the SQL database on start and then use the locally saved data; in case of a server outage the user should also be able to load the app using the latest saved data.
The thing is, the data needs to be encrypted and secured from unauthorised use. I used to have an SQLite database, but running on two databases doesn't feel efficient.
What solution would you suggest?
|
.NET - Best way to store encrypted offline data in Desktop Application
|
You need to concatenate the destination string:
$sourceFile = "C:\Test1\"
$destination = "C:\Backup"
copy-item $sourceFile -destination ($destination + "\server-backup-" + (Get-Date).ToString("yyyy_MM_dd_hh_mm_ss")) -Recurse
I have slightly adjusted your Get-Date call, as I am not sure of the output of -Format on Get-Date.
If there are any spaces in the path, the part after the space is treated as a separate parameter.
To avoid this you can wrap the string in quotes; however, you cannot execute functions inside a plain quoted string, so using parentheses as I have and concatenating the string is another approach.
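An equivalent sketch that builds the path with Join-Path instead of string concatenation, using the same source and destination as above:
$stamp = Get-Date -Format "yyyy_MM_dd_hh_mm_ss"
$target = Join-Path $destination ("server-backup-" + $stamp)
Copy-Item $sourceFile -Destination $target -Recurse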
|
I am new to PowerShell and trying to see if I can copy a folder from a Test folder, put it in a Backup folder, and rename the copied folder to the date it was done.
$sourceFile = "C:\Test1\"
$destination = "C:\Backup"
copy-item $sourceFile -destination $destination .\server-backup-$(Get-Date -format "yyyy_MM_dd_hh_mm_ss") -Recurse
However, I keep getting an error saying cannot be found that accepts arguments.
Copy-Item : A positional parameter cannot be found that accepts argument '.\server-backup-2022_01_20_09_32_27'.
At line:5 char:2
+ copy-item $sourceFile -destination $destination .\server-backup-$(Ge ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Copy-Item], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.CopyItemCommand
Is there a better way of going about this or can this error be easily fixed?
|
Powershell - Error message for trying to add a date to the end of a copy of a folder
|
Sparsity information is kind of redundant. You can determine whether some parts of a file should be sparse by checking whether those parts only contain zeros.
head -c $(( 1024 * 1024 )) /dev/urandom > foo
head -c $(( 1024 * 1024 )) /dev/zero >> foo
head -c $(( 1024 * 1024 )) /dev/urandom >> foo
stat foo
Size: 3145728 Blocks: 6144
fallocate --dig-holes foo
stat foo
Size: 3145728 Blocks: 4096
As you can see from the block count, making it sparse was successful, and all those blocks that were completely zeroed out have been successfully removed.
|
|
I created a backup of a file, then compressed it and stored it using tar.
At the time I didn't know it was a sparse file, so I didn't use the -S flag.
Now I am trying to retrieve the data, but when I extract the archive I get a non-sparse file.
Is there a way to retrieve that information, or is it lost for good?
Thanks in advance.
|
retrieve sparse file from tar created without -S flag
|
There is one way to solve this problem.
I was also suffering from it, but I found out how to use a batch file.
There are mainly two commands:
XCOPY
ROBOCOPY
For your needs here, (2) robocopy will be helpful.
robocopy will back up your specific file or folder; even if you changed only a few megabytes of data, it will only copy the new or changed data.
How to do it:
Open Notepad and type
robocopy "path of the folder you want to copy" "folder you want to paste into" /MIR
and then save the Notepad file as a ".bat" file.
For more help check out
https://www.geeksforgeeks.org/what-is-robocopy-in-windows/
|
|
I have multiple copies of the same database, with a size of several terabytes. I was looking for a solution where I could upload the very first backup and then, instead of uploading the same entire backup with only a few megabytes of changes, only upload the blocks that have changed. I know this process is called deduplication, so I was wondering if there is software that does this, ideally built into a NAS-management software solution like OpenMediaVault.
|
Deduplication solution for multiple offline database backups
|
Write a driver script that provides input to the script via a here-document.
#!/bin/bash
/path/to/backup_script <<EOF
backup
ext1~ext2~ext3
EOF
Then put this driver script in crontab.
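For example, a crontab entry along these lines (the schedule and path are placeholders):
# run the driver every Sunday at 02:00
0 2 * * 0 /path/to/driver_script.sh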
|
I need to add this script to cron, for example to make a backup every week, but I don't know how to pass values to it. The script doesn't take any arguments; the values are read from the console. Can you help me, please?
#!/bin/bash
read -p "Write what you want or -h to know how script works: " command
if [ $command = "backup" ]
then
IFS="~"
read -p "Write extensions of file to backup: " exts
...
|
Cron in bash (linux)
|
I use Viper FTP for Mac for a similar task. There you can define what they call an "observed folder": a folder that is watched by Viper FTP, and if any modifications are detected, the new/modified files are uploaded to the defined server(s). The app also supports Google Drive.
|
|
I want to synchronize some local folders from my desktop to my Google Drive account. I have to mention that I have more than 2 million files totalling 1 TB, with file sizes ranging from 1 byte to 100 GB (a zip archive).
Using Drive for desktop, the Google application for synchronization, takes ages, since each time the app is opened it checks all the files. Considering the number of files I have, you understand that this is quite long. Additionally, I have the feeling that only 3 files can be uploaded to the drive simultaneously with this "Google Drive for desktop" app.
I am looking for an alternative solution that would allow me to save my local folder in a "mirror" fashion: when modifications are made in the local folder on my computer, they are pushed to my Google Drive in real time.
Do you know of free software I could use that would not spend ages in an endless checking loop before synchronizing my files?
|
Mirror synchronization of local folders with google drive
|
You can try an AWS S3 backup repository; this way you can snapshot/restore indices between clusters that use the same S3 repository.
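A minimal sketch of registering such a shared repository on both clusters; the bucket and repository names are placeholders, and on Elasticsearch 7.6 the repository-s3 plugin must be installed first:
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-backup-bucket"
  }
}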
|
|
We have two 6-node ES clusters, cluster A and cluster B, both self-managed on our own machines.
How can I back up between A and B?
Cluster A is our production environment cluster; B is our backup environment cluster.
The ES version is 7.6.2 and each cluster has 6 nodes.
Each cluster machine has 64 GB of RAM and 6 TB of disk, and the data is 5 TB.
The backup cycle is once a month.
|
how to make backup between two elastic cluster
|
I think there is an easier way to transfer a file from your VM to your local machine, given that you are already using SSH:
scp /path/to/your/file.tar.gz user@ip:/path/on/the/backup/machine
Check the permissions while doing that using
ls -lh
I checked and thanks for the code - it works. I was hoping to create the tar.gz file and transfer it in one step. Is that possible? Or would I have to create the tar.gz file and store it locally and then transfer it using scp and then delete the local file?
– Kevin
Dec 8, 2021 at 18:40
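If the archive should be created and pushed from within the Python script itself (the one-step approach asked about above), a minimal sketch using paramiko's SFTP support; the host, username and paths are the placeholders from the question:
import tarfile
import paramiko

archive = "/tmp/PiOneClone.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add("/home/PiOne", arcname="PiOne")   # archive the Pi's home folder

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname="192.168.1.151", username="usernamehere")
sftp = ssh_client.open_sftp()
sftp.put(archive, "/Volumes/PiBackups/PiOne/PiOneClone.tar.gz")
sftp.close()
ssh_client.close()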
|
|
I have a Raspberry pi running on my home network along with a Macintosh that acts as a backup server. I am attempting to craft a Python3 script that will run on the Pi to create a tar.gz backup of the Pi's home folder (along with all of the contents) and transfer it to the Macintosh using SSH. I have the SSH connection (using keys) running but am stumped when I try to create the backup file and transfer it.
My code so far:
#!/usr/bin/python3
import paramiko
import tarfile
import os
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connnect(hostname = '192.168.1.151', username = 'usernamehere')
print("I'm connected!")
#The following statement is the problem as I see it
tar czvf - ./home/PiOne/ | ssh_client -w:gz "cat > ./Volumes/PiBackups/PiOne/PiOneClone.tar.gz"
print("File transferred!")
ssh_client.close
I would appreciate any help in creating the script!
|
Create a tar.gz file and transfer it to network server
|
Solved my problem by reading the manual a little bit more on rsync and ssh.
Generated an ssh key on the client: ssh-keygen
Copied it to the host: ssh-copy-id user@host
Modified the cron job: 0 2 * * 7 /usr/bin/rsync -av --delete user@ip:/mnt/driveuid/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
Now if my computer can't connect to the host, the job doesn't run.
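If you prefer to keep the original pull-from-the-mounted-share approach, a minimal safeguard sketch is to test that the CIFS share is actually mounted before rsync runs (paths as in the question):
0 2 * * 7 /usr/bin/mountpoint -q /mnt/share && /usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log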
|
|
I'm using fstab to mount a Samba share on boot:
//ip/share /mnt/share cifs credentials=/home/user/.smbcredentials,uid=user 0 0
and a scheduled rsync cron job to copy the contents to a local drive once a week:
0 2 * * 7 /usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
The thought came to mind that if the host were unavailable, /mnt/share would be empty; if the cron job then ran, it would wipe all the data on my local backup mount because of the difference and the --delete flag. I want to keep that flag, as I want a clone of my share.
I'm relatively new to Linux and curious what approach might add a safeguard to this. Could I run "ls" to check for content and continue only if it is present? Otherwise, what would ensure I don't inadvertently delete everything on my backup mount?
|
Cron job using pre-mounted samba share, not sure if safe...?
|
Doing something like this is going to be quite tricky, I think that the closest you will come to doing this will be to create a new project and then try to use the open-source migration tools at https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration. We used an earlier version of this for an Azure DevOps carveout from another tenant, and it worked well, but we didn't have your "restore-to-point-in-time" use case.
I think that what you're going to need to do is use the ReplayRevisions on the WorkItemMigration class, and then you're probably going to need to write some custom WIQL to get only what you're looking for. It's even conceivable you might need to extend this to get the functionality you want.
Thanks for the comment. Will try it with our IT. It seems like it's not resolving my issue (but it may help in another ;)
– Michał Jakubas
Nov 19, 2021 at 15:34
Ultimately, I think that the code in that project may wind up being more of a guide for you to figure out how to play forward history (if it even is capable of doing that).
– WaitingForGuacamole
Nov 22, 2021 at 14:07
|
|
I want to restore the state my project was in on 11/10/2021 into another temporary project (not the one I am currently using), so I can recover the order of work items in Backlogs under Boards for that day. I did not delete the project; I just changed the Area Paths for teams, and the order of work items changed. I just want to have a reference in a separate temporary project, so I can compare the work item order between the two and restore the correct order to the actual backlog.
|
Restore the state my project was in 11/10/2021 into another temporary project, so I can only get the order of work items from Boards from that day
|
Are you using the macros or the functions?
save() just saves the current file (same as Ctrl-S)
save_as_dialog() will bring up a dialogue to save the file
You can put them in buttons from the macro menus, or you can define a function and then add that function to a macro button.
|
|
I am working on a script for NEdit in a LINUX environment,
but I can't get one of the functions of the script to work.
I would like to add a "save file" function to the script, but it seems not to be possible.
I checked the NEdit help document; it mentions that the "Make Backup Copy" function is UNIX only.
Does anyone have experience with this? I want to automatically save/back up the file after calling the script in a LINUX environment.
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Make Backup Copy
On Save, write a backup copy of the file as it existed before the Save command with the extension .bck (Unix only).
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
https://www.doc.ic.ac.uk/lab/labman/nedit/n6.html
|
NEdit_How to use the Shell on NEdit to save the file - LINUX base
|
No, there isn't a way to load/unload multiple tables in a single DSBulk execution, because it doesn't make sense to do so.
In any case, unloading data to CSV isn't recommended as a means of backing up your cluster, because there are no guarantees that the data will be consistent at a point in time.
The correct way of backing up a Cassandra cluster is using the nodetool snapshot command. For details, see Apache Cassandra Backups.
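For example (the snapshot tag and keyspace name are placeholders):
# take and then list a snapshot on each node
nodetool snapshot --tag monthly_backup my_keyspace
nodetool listsnapshots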
If you're interested, there is an open-source tool which allows you to automate backups -- https://github.com/thelastpickle/cassandra-medusa. Cheers!
|
|
I use DSBulk for text-based backup and restore of a Cassandra cluster. I have created a Python script that backs up/restores all the tables in the cluster using dsbulk load/unload, but it takes a long time even for small amounts of data because a new session is created for each table (approx. 7 s). In my case I have 70 tables, so 70 * 7 s is added just for session creation. Is there a way to back up data from all tables in a cluster using a single session with dsbulk? From the docs, I see dsbulk is suitable only for loading/unloading a single table at a time. Is there any alternative or other approach for this? Please suggest if any!
Thanks.
|
Using DSBulk for backup/restore takes too long
|
Contact the author of the program; his name is in the popup dialog that you show as picture002. Also, please post the text of what is in the pictures directly in your messages, so people can see what you're asking without clicking links.
|
How do I write the parameters so that I can use its volume shadow copy function?
picture 001
Look at this note: "-c" means do not use volume shadow copy. But how do I enable it? There is nothing written about that. I guess not writing "-c" means the function is used?
But no matter how I write these parameters, in the popup program window that follows, the small "use volume shadow copy" box is never checked...
picture002
So please tell me: how do I write the parameters so that I can use the "volume shadow copy" function to create the backup file?
|
about the commandline mode of disk2vhd
|
Oops, never mind. rsync is a fantastic tool!
You can combine --ignore-existing and --delete to achieve exactly what I'm asking for: add new files, delete files that no longer exist, but do not update existing ones.
$mkdir folder1 folder2
$touch folder1/sample1 folder1/sample2
$rsync -a --ignore-existing --delete folder1/ folder2/
$ls folder2/
sample1 sample2
$echo "this is an update" > folder1/sample2
$cat folder1/sample2
this is an update
$rm folder1/sample1
$touch folder1/sample3
$rsync -a --ignore-existing --delete folder1/ folder2/
$ls folder2
sample2 sample3 -- (sample1 deleted)
$cat folder2/sample2
(empty sample2 has not been updated)
|
|
Currently I'm using rsync to create a remote backup of certain folders and subfolders, and I would like to achieve a very particular behaviour: I want to add new files and remove the ones that no longer exist, but never update an existing file.
It's something related to security: we add new files to those folders and remove some from time to time, but NEVER change the contents of existing files. We would like to keep each file as it was created.
This is my current rsync command (part of a bash script):
rsync -a --delete /srv/backup/ xxx@xxxx:~/backups/
I've seen the --ignore-existing option, but this disables the --delete option.
|
rsync params to create, delete but not UPDATE files
|
OneNote 2016 (the "old" OneNote, which comes with Office 365 but is also available for free) has a built-in backup tool which can make a local backup every X hours/days.
You only have to make sure OneNote is always open.
|
|
I am looking for an automation, with Microsoft Power Automate or another tool, which automatically creates, at a certain interval, a local backup of a OneNote notebook or of the OneDrive folder in which the OneNote notebook resides.
|
OneNote local backup
|
OK, I just found out: there is NO way to downgrade or get the installer officially, but there are sites like archive.org that have the old installer.
If anyone is having trouble because of this problem, I suggest looking there.
|
I have a client that backed up his data with MEB 4.0.0, so now I need to extract the contents of the image, but because I'm using MySQL Enterprise Backup 8.0 I can't, since it's not compatible.
Is there a way to downgrade my mysqlbackup? Or maybe upgrade his image? Right now it's in .MBI format, so I don't see a way out of this, since I can't extract it and can't find the older version of mysqlbackup anywhere.
I'm using Windows 10 btw; any help would be appreciated!
|
Is there a way to downgrade mysqlbackup?
|
Microsoft Azure does not support customer-initiated database backups in Azure SQL Database; backups occur automatically. This is why you do not see the Backup option in SSMS.
The system does it automatically, as mentioned above. The Backups page in the portal lets you configure retention for the existing backups that the system creates.
Please refer to the following article: https://learn.microsoft.com/en-us/azure/azure-sql/database/automated-backups-overview?tabs=single-database
|
I want to create a DB backup, but in the Azure portal I don't see the "Manage backups" option. Also, in SQL Server Management Studio, when I right-click the database and go to "Tasks", I don't see the "Back Up" option.
Can this depend on the Azure subscription type?
Thank you.
|
Create database backup at Azure portal
|
It is the content parameter you're looking for. Data Pump Export is, basically, what you should study.
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded (...)
Default is ALL.
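Applied to the full-export command from the question, the two variants might look like this (the credentials, directory and file names follow the question's examples and are placeholders):
expdp system/password@db10g full=Y content=METADATA_ONLY directory=TEST_DIR dumpfile=DB10G_meta.dmp logfile=expdpDB10G_meta.log
expdp system/password@db10g full=Y content=DATA_ONLY directory=TEST_DIR dumpfile=DB10G_data.dmp logfile=expdpDB10G_data.log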
|
|
I have been asked to make an export-type backup in Oracle: full or schema, and default, structure-only, and data-only.
I found these example commands to back up a database and a schema:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
What options should I add to make a backup with data and without data?
What is a default backup?
|
How to do a backup in oracle with data and without data?
|
I found something on the Microsoft Tech Community, posted by the Azure DB support team, which may help you:
https://techcommunity.microsoft.com/t5/azure-database-support-blog/exporting-a-database-that-is-was-used-as-sql-data-sync-metadata/ba-p/369062
You can also consider Export to a BACPAC file (Azure SQL Database and Azure SQL Managed Instance).
|
|
I am trying to create a backup of an Azure SQL database with the Data-tier Application Wizard of SQL Server Management Studio, but I get a lot of errors like the following:
One or more unsupported elements were found in the schema used as part of a data package, error SQL71501...
Any hint on how to solve this error?
|
Error while creating backup of SQL Azure with Data Tier Application Wizard
|
Looking at the Register API for Protection Containers, it looks like the supported values for OperationType are: Invalid, Register and Reregister. The Unregister API fires a HTTP DELETE request that is not straightforward to simulate with an ARM template. ARM Templates are primarily meant to be used for creating and managing your Azure resources as an IaC solution.
That said, if you have ARM templates as your only option, you could try deploying it in Complete mode. In complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified in the template.
To deploy a template in Complete mode, you'd have to set it explicitly using the Mode parameter since the default mode is incremental. Be sure to use the what-if operation before deploying a template in complete mode to avoid unintentionally deleting resources.
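A minimal sketch of such a deployment with Azure PowerShell; the resource group and template file names are placeholders:
# preview what Complete mode would change or delete
New-AzResourceGroupDeployment -ResourceGroupName "backup-rg" -TemplateFile ".\vault.json" -Mode Complete -WhatIf
# then deploy in Complete mode
New-AzResourceGroupDeployment -ResourceGroupName "backup-rg" -TemplateFile ".\vault.json" -Mode Complete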
|
|
I have performed discovery operations for listing protectable items in Azure Backup: 'SQL in Azure VM'.
I am able to perform 'Discovery' using the following template:
"resources": [
{
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers",
"apiVersion": "2016-12-01",
"name": "[concat(parameters('vaultName'), '/', parameters('fabricName'), '/',parameters('protectionContainers')[copyIndex()])]",
"properties": {
"backupManagementType": "[parameters('backupManagementType')]",
"workloadType": "[parameters('workloadType')]",
"containerType": "[parameters('protectionContainerTypes')[copyIndex()]]",
"sourceResourceId": "[parameters('sourceResourceIds')[copyIndex()]]",
"operationType": "Register"
},
"copy": {
"name": "protectionContainersCopy",
"count": "[length(parameters('protectionContainers'))]"
}
}
]
I similarly tried the following operation types:
"Reregister": works as expected.
"Invalid": did not perform any operation.
Could someone guide me through unregistering containers using an ARM template?
(I already have the API to do it, but I need to do it with an ARM template.)
Similarly, is there any way to rediscover DBs within a registered container using an ARM template?
Any help is much appreciated.
|
Azure SQL Server Backup: Need to UnRegister Containers and Redisover DBs in Azure SQL Server Backup using ARM templates
|
You can always generate a script of your database. This will not only take less space, but will also give you a lot of options, such as targeting different versions of SQL Server, a schema-only script, a schema-and-data script, and including triggers in the script.
All you have to do is:
Right-click on the DB > Tasks > Generate Scripts...
Then follow the wizard to the Advanced options screen.
Under Advanced, select the options according to your requirements.
Run the script on your desired server to restore everything. Boom!!
Thanks, but I need a backup with data. Your approach only restores the database without data.
– jrz.soft.mx
Sep 6, 2021 at 19:46
You see the "Type of Data" option? Its default is "Schema only"; you can change it to "Schema and Data" to get a backup script with data.
– Bilal Bin Zia
Sep 13, 2021 at 5:01
|
|
I need to update the backup and restore module in my app because it is consuming a lot of disk space.
The module runs a query for the backup, but it does not work in Express editions of SQL Server. I need to fix the module and make compressed backups on any SQL Server edition, without reinstalling the existing SQL instances.
SQL backup query with compression
BACKUP DATABASE [MyDataBase] TO DISK = N'C:\backup\MyDataBase20210829T213904.bak' WITH NOFORMAT, NOINIT, NAME = N'MyDataBase-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
SQL error
Msg 1844, Level 16, State 1, Line 1
BACKUP DATABASE WITH COMPRESSION is not supported on Express Edition
|
Backup and Restore SQL Server Database With Compression When Compression Is Not Supported
|
I figured it out. I kind of followed https://unix.stackexchange.com/questions/129322/file-missing-after-fsck, but:
I copied the partition to an external drive using dd, then mounted the external drive (which just worked, even though I could not mount the original Ubuntu partition). Then I went into the lost+found folder on the partition and used "find" to search for a file I knew I had in my home folder, and it found that file. I am now able to access all my documents etc.
|
I have two Linux partitions on my laptop (one Ubuntu and one Garuda). Ubuntu was giving me problems, so I installed Garuda to check it out. The Garuda partition filled up, so I used KDE Partition Manager to shrink the Ubuntu partition so I could expand Garuda.
Then Ubuntu wouldn't mount and would not boot, as it said the filesystem was the wrong size. I ran fsck on the partition and hit yes to pretty much everything. This included force-rewriting blocks it said it couldn't reach, removing inodes, etc. Probably a mistake in hindsight.
Now, I got an external hard drive and cloned the Ubuntu partition using "sudo dd if=/dev/nvme0n1p5 of=/dev/sda1 conv=noerror,sync". The external hard drive mounted without problems, but it does not have a /home/ folder, only folders such as /etc/.
I don't think there are many files I can't get back from a git repo, but it would be nice to have access to the /home folder so I can grab everything, remove the Ubuntu partition, and resize Garuda.
Thanks in advance!
|
How to recover home folder? Cloned partition recovered other directories like /etc/ but not /home
|
Simple, but ridiculously hard to find with Google if you don't know what to search for.
There are .vhdx files in the WindowsImageBackup folder
Find the .vhdx that represents the disk you want to mount/explore
Attach the .vhdx with diskpart or Disk Management
Assign a drive letter
Browse with File Explorer
Caveat: You might have to open a DOS cmd prompt in Admin mode and use diskpart to assign the drive a letter.
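For example, in an elevated command prompt (the .vhdx path, volume number and drive letter are placeholders):
diskpart
DISKPART> select vdisk file="D:\WindowsImageBackup\MyOldPC\Backup 2021-12-01 120000\disk.vhdx"
DISKPART> attach vdisk readonly
DISKPART> list volume
DISKPART> select volume 5
DISKPART> assign letter=Z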
|
|
I have a Windows 10 image backup from about 6 months ago that contains some code I need to recover.
The image backup was made on my prior PC, which has since suffered a catastrophic hard drive failure and was reimaged with Windows 10.
Is it possible to browse the old image backup and find and extract the folder(s) I need?
|
Browse & recover specific files from Windows 10 image backup?
|
Is this a manual or an automatic backup? An Azure VM or an Azure SQL DB? What kind of backups? Regarding "If I cancel a backup job after it starts, is the transferred backup data deleted?":
No. All data that was transferred into the vault before the backup job was canceled remains in the vault.
Azure Backup uses a checkpoint mechanism to occasionally add checkpoints to the backup data during the backup.
Because there are checkpoints in the backup data, the next backup process can validate the integrity of the files.
The next backup job will be incremental to the data previously backed up. Incremental backups only transfer new or changed data, which equates to better utilization of bandwidth.
If you cancel a backup job for an Azure VM, any transferred data is ignored. The next backup job transfers incremental data from the last successful backup job.
This article answers common questions about the Azure Backup service: https://learn.microsoft.com/en-us/azure/backup/backup-azure-backup-faq
|
|
I have a weekly incremental backup with Azure Backup.
What happens to this backup if the previous backup has not completed when the next one is due to start? (This has happened twice.)
Will this backup start only after the interrupted previous backup is complete, or will it be aborted and re-run?
Regards,
|
About Azure Backup
|
The link which you shared is for Bacula Enterprise; we are using Bacula Community. Is there any related document you would recommend for the Bacula Community edition?
|
|
I am using a CentOS 7 machine, Bacula Community edition 11.0.5 and a PostgreSQL database.
Bacula is used to take full and incremental backups.
I followed the document linked below to store the backup in an Amazon S3 bucket:
https://www.bacula.lat/community/bacula-storage-in-any-cloud-with-rclone-and-rclone-changer/?lang=en
I configured the storage daemon as shown in the above link. The backup succeeds and the backed-up file is stored in the given path /mnt/vtapes/tapes, but the backup file is not moved from /mnt/vtapes/tapes to the AWS S3 bucket.
The above document mentions that we need to create schedule routines to the cloud to move the backup file from /mnt/vtapes/tapes to the Amazon S3 bucket.
I am not aware of what a cloud schedule routine is in AWS. Is it a Lambda function or something else?
Is there any S3 cloud driver which supports Bacula backups, or any other way to store Bacula Community backup files on Amazon S3, other than S3FS-FUSE and libs3?
|
How to store bacula (community-edition) backup on Amazon S3?
|
AFAIK, there's no way to stop it.
However, in order to solve your problem, you can use mv dump.rdb back.rdb to automatically move the dump file before copying.
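If you would rather wait for an in-flight BGSAVE to finish before copying, a minimal sketch is to poll the rdb_bgsave_in_progress field from INFO persistence (the file paths are placeholders):
# wait until no background save is running, then copy the dump
while redis-cli info persistence | grep -q 'rdb_bgsave_in_progress:1'; do sleep 1; done
cp /var/lib/redis/dump.rdb /backup/dump.rdb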
|
|
I want to stop a BGSAVE (if one is running) before copying the dump.rdb file for backup.
My problem is that sometimes, while cp is copying the dump.rdb file, an ongoing BGSAVE completes and the dump.rdb file gets updated.
|
How to stop a BGSAVE in redis?
|
See the comments; actually this was not a problem, but a misunderstanding of the role of graphs in a repository.
|
|
I am using GraphDB and have a data update problem.
Data in the repository comes from 2 sources:
Millions of triples come from an external source and are updated by a full replace each week.
Thousands of triples are created by users and are permanent. They use the same ontology as the external source and are stored in the same repository, so that SPARQL queries can run on both data sets without any difference. However, a simple SPARQL query can retrieve all user triples.
The problem is the weekly update of the external source.
My first idea was to:
Export the user data
Import, with replace, the new external dataset
Reimport the user data
Problem: I need to reimport the exported data, but imports are in RDF format, which is not available in export.
Another way (which is about the same):
Import the weekly update into a new repository
Copy the user data from the 'old' repo to the new one
Switch the server to the new repo.
Problem: in order to copy user data I need an "INSERT SELECT" SPARQL statement using services, which exists in SQL (without services) but not in SPARQL.
Finally, GraphDB OntoRefine could do the work, but not efficiently on a weekly basis.
Another way could be to store user data in a separate repo, but SPARQL queries involving sorting could become hard to maintain and slow to run.
I can also export user data in JSON format, programmatically generate RDF/XML files and send them to the GraphDB API. This is technically possible (I do it in very special cases and it works fine), but it is not reliable for a big amount of data, slow, and a lot of developer work.
In short: I am stuck!
|
GraphDB export/import or SPARQL to transfer data from a repository to another
|
Looks like the 3rd-party tools you mentioned should be the best fit, especially Velero, because as per this post:
Velero is a backup tool not only focused on volumes backups, it also
allows you to backup all your cluster (pods, services, volumes,…) with
a sorting system by labels or Kubernetes objects.
Stash is a tool only focused on volume backups.
To get more information on using Velero and its newest features you can visit the official documentation site and this website.
I have checked Velero; the docs suggest it only supports cloud storage providers. I'm looking to get a local cold backup, not using cloud storage.
– FrozenBeef
Jul 23, 2021 at 13:54
|
|
I'm looking to create a local backup for a PV/PVC in K8s, and then restore it (not using any CSI).
I have tried VolumeSnapshot in K8s, but it creates an in-cluster backup, and what I need is a local copy that I can archive and move around. I also found some 3rd-party tools like Stash/Velero/Kasten, but I'm not sure if any of them fits my goal.
Can someone point me to the correct document to look at, or tell me whether this is possible at all? Thanks!
|
Backup Kubernetes PV/PVC to Local Disk w/o using CSI?
|
!! Problem Solved !!
Found out about rrsync --- /usr/share/doc/rsync/scripts/rrsync, copy it to wherever.
ServerA:authorized_keys --- command="sudo /usr/local/bin/rrsync -ro /backup"
Since I'm keeping a copy of the backups on ServerA, I might as well rsync from them instead of using rsnapshot on ServerB. (This was my initial idea, but it doesn't work since there are duplicate files because of links that rsnapshot creates, I ended up having rsnapshot running both on ServerA and ServerB, to save backups from ServerA to a localDir on ServerA and also make remote snapshots from ServerA to ServerB.)
Also changed the sudoers entry on ServerA so that the backup user's passwordless sudo rule allows /usr/local/bin/rrsync instead of rsync.
Now works as expected.
Note that the source path passed to rsync/rsnapshot on ServerB is relative to the /backup root set in the authorized_keys rule, as in the sketch below.
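For illustration, the equivalent plain rsync pull from ServerB would look something like this (the key path and the daily/ sub-directory are assumptions, not taken from the setup above):
# on ServerB; "daily/" is resolved relative to /backup because of the rrsync restriction on ServerA
rsync -a -e "ssh -i /root/.ssh/backup_key" backup@ServerA:daily/ /srv/snapshots/serverA/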
|
I'm trying to make a rootfs backup from ServerA on to ServerB.
The connection is one way and is initialized from ServerB using rsnapshot.
I have made a backup account on ServerA and enabled passwordless sudo only for rsync.
What I'm trying to accomplish:
Change the authorized_keys file on ServerA, so only the rsync command can be used via ssh.
On ServerB, rsnapshot is set up to run rsync over ssh.
I have tried restricting the key's command in authorized_keys on ServerA, but rsync keeps crashing and giving IO error codes.
What am I missing here?
|
Hardening authorized_keys used in rsync backup
|
0
I understand that you want safety and performance for your Hyper-V VM backups. Backup and restore is a stressful experience. As you mentioned, those solutions all use the Hyper-V checkpoint technology, and I don't know of anything else.
We tested a lot of backup tools and ended up with Veeam. Usually backup and restore work. Unfortunately, it puts a lot of weight on the infrastructure during backup (storage gets slow) and sometimes backups fail because of this. To avoid that we set up fixed backup windows outside working hours. Keep in mind that we use the backup only for server VMs (not VDI).
I would recommend Veeam as a backup solution, but maybe you can also take a look at Commvault (https://documentation.commvault.com/commvault/v11/article?p=31493.htm).
Greetings.
Share
Follow
answered Jul 2, 2021 at 11:33
AzureBaumAzureBaum
26122 silver badges99 bronze badges
Add a comment
|
|
I'm looking for an alternative to Hyperoo, one of the best backup solutions for VM backup.
I tried many products, like Veeam, Iperius, Altaro, Acronis etc., but every one of them uses Microsoft checkpoints and creates AVHDX files. Sometimes the backup has problems and the AVHDX remains open, and I find myself forced to merge that checkpoint hoping everything goes well.
All these programs make a fake incremental backup.
With every small modification the VHDX changes a little, the backup program sees that the virtual machine has changed, and it makes a full backup.
Hyperoo creates one full vhdx file and then many rollback files, one file each day.
|
Backup Hyper-V VMs
|
0
Well, the first problem is that you’re trying to execute -s rather than ln -s.
What is the goal of your ln operation? To symlink the folder called Old_Backup inside the MobileSync directory? That won’t work if your intention is that additional backups go to Old_Backup. You should symlink the Old_Backup directory to the location and name it originally had.
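In other words, the usual pattern (a sketch using the paths from the question; adjust names as needed) is to move the original folder aside and symlink the external location into its place, so new backups land on the external drive:
mv ~/Library/Application\ Support/MobileSync/Backup ~/Library/Application\ Support/MobileSync/Backup.old
ln -s /Volumes/Personal/user123/iOSBackup/Old_Backup ~/Library/Application\ Support/MobileSync/Backup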
Share
Follow
answered Jun 28, 2021 at 13:05
David NedrowDavid Nedrow
1,11811 gold badge1010 silver badges2626 bronze badges
Add a comment
|
|
When I try to move my iOS backup folder (which does not yet have any backups) to my external hdd, the command line (on Mac) tells me that the command -s is not found.
This was the directory which I've tried to link the iTunes backups to:
user123@user123s-MacBook-Pro ~ % -s /Volumes/Personal/user123/iOSBackup/Old_Backup/ ~/Library/Application\ Support/MobileSync
zsh: command not found: -s
when I entered it manually instead of copy pasting it, it said that permission was denied, even though I had granted full disk access to the terminal app before for coding in vs code and such...
Thanks!
|
Mac: zsh: command not found: -s / (failed: redirect iOS backups to external hdd)
|
0
A simple file system backup that just copies the files without getting PostgreSQL into backup mode first (see the documentation) is not a valid backup that you can expect to be able to restore from.
If you get your database to start somehow (e.g, by using pg_resetwal, which will destroy some data), you will likely have some data corruption. It is probably a good idea to hire a professional who can help you salvage some of your data.
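For future file-system-level backups, pg_basebackup takes care of backup mode for you; a minimal sketch (host, user and target directory are placeholders):
# requires a user with REPLICATION privilege; -Ft -z writes compressed tar files, -P shows progress
pg_basebackup -h localhost -U postgres -D /backups/pg_base_$(date +%F) -Ft -z -P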
Share
Follow
answered Jun 29, 2021 at 0:33
Laurenz AlbeLaurenz Albe
225k1818 gold badges234234 silver badges303303 bronze badges
Add a comment
|
|
my Mac unfortunately crashed. I have a Time Machine backup.
Using the Migration Assistant on a new Mac allowed me to get all my documents back, but unfortunately Postgres is not able to start.
So I was trying to manually move my Postgres db from the backup to the freshly installed OSX.
Here are the steps I followed:
installed a fresh copy of Postgres (v11), downloaded from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
replaced the /Library/PostgreSQL/11/data folder with the one I dig out from my dead Mac, where I assume my data is stored
checked/fixed owner of folders (must be 'postgres', I assume)
started pgAdmin
entered the PW for postgres user
clicked on "PostgreSQL 11" server to show databases
This is where I have problems. I am prompted with the same pop-up asking me to enter the password for the user 'postgres'. I type it and I get the following:
Putting back the original data folder allows the server to start properly. So I think there's a problem with my backed-up data folder.
Thank you for your help.
|
OSX: migrating Postgres db from one Mac to another
|
0
You are working with a dangerous setup, since you seem to be betting on redo log files that are never filled up between your backups. If your data has no value, go ahead; otherwise switch to archive log mode.
Archives are created when a redo log group fills up. So, in your case you need to copy the online redo log files manually to the remote site for recovery.
How sure are you about the redo log files not being overwritten?
Be sensible and, if this is production, switch to archive log mode. Otherwise, promise not to make promises about being able to do point-in-time recoveries.
Another point: if your online redo log files are damaged, your database has a big problem, and in your case you might lose a day's worth of work. Is that OK? If not, reduce the size of the redo log files to a point where a log switch happens every now and then. I am sure your company has an idea of how much transaction loss it can accept. Many companies allow less than one hour of transaction loss.
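For reference, switching to archive log mode is a short operation done with the database mounted; a sketch to run as SYSDBA (make sure the archive destination has enough space first):
-- as SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;  -- verify the new mode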
Share
Follow
edited Jun 23, 2021 at 15:14
answered Jun 23, 2021 at 15:03
user123664user123664
1
Sir is it possible to have a Skype call please? I need to ask few things for my understanding.
– Ahmed
Jun 24, 2021 at 16:08
Add a comment
|
|
I would like to plan and test my database recovery in another site (another instance on another server in disaster recovery site).
I take a monthly RMAN level 0 image copy every month and daily incremental level 1 backups.
The database is running in noarchivelog mode. The online redo logs are multiplexed to a disk in the disaster recovery site. Also we have a recovery catalog on another server.
I want to test restoring the recent (yesterday) backup to database in disaster recovery site and then recover to just apply the online redo log files, how to achieve that?
Side question: is it sufficient to recover if we only have yesterday's backup and the online redo logs contain all of today's transactions with none of them overwritten, given that the database is in noarchivelog mode?
What is the use of archivelog mode if we have a daily backup and the redo logs are not overwritten during the day until the backup is taken?
what is the use of backing up archive logs?
|
How to use a recent Oracle backup file (from yesterday) and only online redo logs to recover the database in another location (disaster recovery)?
|
0
I am facing the same issue. @ExcludeLogShippedFromLogBackup parameter value is 'N'. Database recovery model is set to Simple.
My Full Backup and DIFF backup works perfectly. The backup is taken to the Azure Blob container.
Below is the command inside the Step (SQL job)
EXECUTE [dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES,-TBPSmsSystemArchive%',
@Url='https://xxxx.blob.core.windows.net/sql-db-backup',
@BackupType = 'LOG',
@Verify = 'Y',
@CleanupTime = NULL,
@CheckSum = 'Y',
@LogToTable = 'Y',
@BlockSize=65536,
@MaxTransferSize=4194304,
@ExcludeLogShippedFromLogBackup = 'N'
Share
Follow
answered Jun 8, 2022 at 19:43
HishHashHishHash
15711 gold badge22 silver badges1111 bronze badges
3
1
I think database would need to be in FULL recovery mode in order to be able to take LOG backups.
– Serdia
Jun 8, 2022 at 19:52
@Serdia I tried with Recovery model to FULL and ran the SQL job for log backup. Still no luck. Any things else that I will need to check?
– HishHash
Jun 8, 2022 at 20:03
@Serdia, I tried with @OverrideBackupPreference='Y' also and it still was not working. The next day when I checked the LOG backups started to work. Only change was that during midnight the DIFF backup happened and I am left to assume that that could a a possible reason for LOG backups to work. Thanks.
– HishHash
Jun 9, 2022 at 13:40
Add a comment
|
|
For some reason LOG backup doesn't bring the .trn file in specified location.
It does work with FULL, DIFF parameters but NOT with LOG.
Is it a bug or I am missing something?
Same folder structure, same account executing code, same permissions.
Database in FULL recovery mode.
No error generated. It just does not deliver file.
EXEC master.dbo.DatabaseBackup
@BackupType = 'LOG', -- changing this to 'FULL' or 'DIFF' works fine
@Databases = 'AdventureWorks2017',
@Directory = '\\path\to\drive\',
@Verify = 'Y',
@Compress = 'Y',
@CheckSum = 'Y',
@CleanupTime = 24,
@LogToTable = 'Y',
@FileName = '{ServerName}${InstanceName}_{DatabaseName}_{Year}{Month}{Day}{Hour}{Minute}.{FileExtension}'
|
'LOG' backup using Ola Hallengren doesn't work
|
0
Since
Elassandra = Elasticsearch + Cassandra, you need to back up Cassandra at the same time as Elasticsearch.
By design, Elassandra synchronously updates Elasticsearch indices on the Cassandra write path. Therefore, Elassandra can back up data by taking a snapshot of the Cassandra SSTables and the Elasticsearch Lucene files at the same time on each node, as follows:
For Cassandra SSTables use:
nodetool snapshot --tag <snapshot_name> <keyspace_name>
And copy the Elasticsearch index files with:
cp -al $CASSANDRA_DATA/elasticsearch.data/<cluster_name>/nodes/0/indices/<index_name>/0/index/(_*|segment*) $CASSANDRA_DATA/elasticsearch.data/snapshots/<index_name>/<snapshot_name>/
There is also documentation on Elassandra backup and restore.
Share
Follow
answered May 27, 2021 at 7:49
Majid HajibabaMajid Hajibaba
3,17066 gold badges2424 silver badges5757 bronze badges
Add a comment
|
|
We have a three-node Cassandra/Elassandra cluster and I need to set up backups for it. I used the "nodetool snapshot" command for taking a backup, but as we are using Elasticsearch, do I need to take separate backups of the indices, or is a "nodetool snapshot" backup enough?
If a separate backup is required for the indices, can you please suggest how to take the backup/restore, because there is no proper documentation on Elassandra backup/restore.
Thanks
|
Elassendra backup Solution
|
0
The hot-backup feature does this.
However, this is not avilable in the free edition of Hazelcast.
Share
Follow
answered May 26, 2021 at 16:56
Neil StevensonNeil Stevenson
3,0701010 silver badges1212 bronze badges
1
any alternative?
– roliveira
Aug 29, 2023 at 11:57
Add a comment
|
|
I want to perform a Hazelcast backup and restore on a Kubernetes environment, from one AKS cluster to another AKS cluster. Has anyone done this in the past, or is there any documentation available for it? I just started to learn Hazelcast, so your support will be appreciated.
I am using Embedded version 4.0.
|
Hazelcast Backup and restore
|
0
Hello, I don't know why you say that VMware got corrupted, but I'd recommend not copying your templates from VMware to AWS; just use Packer and automate template creation for both platforms. It's easy and there are a lot of examples.
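A minimal HCL2 sketch of the AWS side (the region, base AMI, instance type and credentials are placeholders; a vsphere-iso source can sit next to it for the VMware side, and the Amazon plugin must be installed/initialised):
source "amazon-ebs" "template" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami    = "ami-0123456789abcdef0"   # placeholder base AMI
  ssh_username  = "ubuntu"
  ami_name      = "my-template-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.template"]
  # shell/ansible provisioners go here to bake the template
}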
Share
Follow
answered May 22, 2021 at 0:54
JMHerrerJMHerrer
11222 bronze badges
1
Hi JMHerrer, Can you share some github code or sample where i can get this way around?
– Abhishek Srivastava
May 23, 2021 at 16:09
Add a comment
|
|
VMware ESXi supports very few pip or Python packages. We have had cases where VMware got corrupted and our templates were lost. We also require periodic backups of the templates on AWS S3. I tried the following approaches:
Copied the virtualenv from my local machine to the ESXi server through the command line - but it failed, as Python is not supported by VMware ESXi
Copied the awscli binary to the /bin/ path on VMware ESXi - but packages were found to be missing
Can anyone suggest a solution to upload templates directly from VMware to AWS S3?
|
AWS S3 backups for VMware templates
|
0
You'd have to write something custom to do a call to get all of the tables with this tag and then for each in the list, run the on-demand backup API call.
Though I have to ask, why not enable continuous backups on these tables and not worry about backups being done? Then you get point in time recovery to any second in the last 35 days.
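If you do go the custom route, a rough boto3 sketch (the tag key/value and the backup naming are assumptions):
import datetime
import boto3

tagging = boto3.client("resourcegroupstaggingapi")
dynamodb = boto3.client("dynamodb")

# find every DynamoDB table tagged Backup=daily
pages = tagging.get_paginator("get_resources").paginate(
    TagFilters=[{"Key": "Backup", "Values": ["daily"]}],
    ResourceTypeFilters=["dynamodb:table"],
)

stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
for page in pages:
    for resource in page["ResourceTagMappingList"]:
        table = resource["ResourceARN"].split("/")[-1]
        dynamodb.create_backup(TableName=table, BackupName=f"{table}-{stamp}")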
Share
Follow
answered May 18, 2021 at 16:25
NoSQLKnowHowNoSQLKnowHow
4,6052424 silver badges3636 bronze badges
Add a comment
|
|
Is it possible to back up DynamoDB tables in one go based on a tag value?
I have about 30 tables that need to be backed up. I have created a tag called "Backup" and assigned it the value "daily". Is it possible to take a backup of all these tables in one go based on the tag value?
|
Is it possible to take DynamoDB Tables in one go based on a Tag Value?
|
0
It defaults to false: point-in-time recovery is disabled on a DynamoDB table unless you explicitly enable it.
Share
Follow
answered May 6, 2021 at 3:37
TedTed
23.3k1111 gold badges9797 silver badges113113 bronze badges
Add a comment
|
|
I checked AWSCloudFormation dynamodb documentation and since PointInTimeRecoveryEnabled is optional, what would be the default value if it is not provided?
|
What is default for PointInTimeRecoveryEnabled in AWS DynamoDB?
|
I found out that Hyper Backup can save snapshots in time, so I'm using it instead of Snapshot Replication
|
I have data on several machines that I want to back up in a way that lets me restore to certain points in time.
From what I read, Snapshot Replication achieves this (as opposed to a backup that clobbers previous results).
The main motivation is that if the data files are ransacked and encrypted, then with a plain backup I can end up in a state where the backed-up files are also encrypted.
One way to do this is by using 2 Synology NAS machines where I can have:
rsync processes to back-up files from multiple machines into a NAS1
apply Snapshot Replication from NAS1 to NAS2
In this way, if the data is hijacked at certain point, I can restore the data to the last good state by restoring NAS2 to previous point in time.
I would like to know if:
Snapshot Replication is the way to go, or there are other solutions?
are there other ways to achieve Snapshot Replication, e.g. with single NAS?
I have an older Synology 2-Bay NAS DS213j.
Assuming that I buy a second, newer, NAS (e.g. DS220j), are the 2 NAS machines expected to work together?
Thanks
|
How to implement Snapshot Replication
|
Your disk is probably formatted with APFS, which wasn't fully supported until High Sierra. Time Machine itself used the HFS+ format until Catalina; only in Big Sur does Time Machine use APFS.
If the disk is APFS you need High Sierra or a newer OS to mount the disk. Even having achieved that, you may only be able to access the Time Machine backup manually. If the backup is a "remote" backup and is in a sparsebundle image you can double-click to mount it; if the extension is backupbundle, change it to sparsebundle and it will mount.
Copying out your files is easy up to Catalina. Under Big Sur your backup disc may look empty, as Apple has made entries invisible. To address that, use the Terminal to move to the disc (in /Volumes) and you will probably find you can open the hidden folder back in the Finder (using open <folder> in Terminal).
HTH
|
I took a backup of my system on a Time Machine disk; at that time it was running macOS Big Sur. Now I want to restore the data on a Mac running El Capitan, but it's not recognising the external hard drive.
Do I need to upgrade it to Big Sur and then try restoring the data, or is it possible on El Capitan?
The disk is visible at Disk Utility section, but cannot be accessible.
Hope to see the solution soon.
Thank you in advance.
|
Time Machine Disk not Recognising
|
0
Use declare -p to reliably serialize variables regardless of their type
#!/usr/bin/env bash
if [ -f saved_vars.sh ]; then
# Restore saved variables
. saved_vars.sh
else
# No saved variables, so lets populate them
declare -A dikv=([foo]="foo bar from dikv" [bar]="bar baz from dikv")
declare -A dict=([baz]="baz qux from dict" [qux]="qux corge from dict")
fi
# Serialise backup dikv dict into the saved_vars.sh file
declare -p dikv dict >'saved_vars.sh'
printf %s\\n "${!dict[@]}"
printf %s\\n "${dict[@]}"
printf %s\\n "${!dikv[@]}"
printf %s\\n "${dikv[@]}"
Share
Follow
answered Apr 26, 2021 at 8:24
Léa GrisLéa Gris
18.4k44 gold badges3636 silver badges4646 bronze badges
3
Not sure to understand this, I run your script and always generates a saved_vars.sh with this content: declare -A dikv declare -A dict
– aicastell
Apr 26, 2021 at 14:37
Are you sure to run bash and not some other shell?
– Léa Gris
Apr 26, 2021 at 15:00
$ bash --version GNU bash, versión 5.0.17(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2019 Free Software Foundation, Inc. Licencia GPLv3+: GPL de GNU versión 3 o posterior <gnu.org/licenses/gpl.html>
– aicastell
Apr 27, 2021 at 5:36
Add a comment
|
|
This question already has an answer here:
How to store state between two consecutive runs of a bash script
(1 answer)
Closed 2 years ago.
I wrote this two simple functions to backup and restore the content of a bash dictionary:
declare -A dikv
declare -A dict
backup_dikv()
{
FILE=$1
rm -f $FILE
for k in "${!dikv[@]}"
do
echo "$k,${dikv[$k]}" >> $FILE
done
}
restore_dict()
{
FILE=$1
for i in $(cat $FILE)
do
key=$(echo $i | cut -f 1 -d ",")
val=$(echo $i | cut -f 2 -d ",")
dict[$key]=$val
done
}
# Initial values
dikv=( ["k1"]="v1" ["k2"]="v2" ["k3"]="v3" ["k4"]="v4")
backup_dikv /tmp/backup
restore_dict /tmp/backup
echo "${!dict[@]}"
echo "${dict[@]}"
My questions:
As you can see, these two functions are very limited, as the names of the backed-up (dikv) and restored (dict) dictionaries are hardcoded. I would like to pass the dictionary as an input ($2) argument, but I don't know how to pass dictionaries as function arguments in bash.
Is this method of writing keys and values into a file using a string format ("key","value") and parsing that format to restore the dictionary the only / most efficient way to do it? Do you know a better mechanism to back up and restore a dictionary?
Thanks!
|
Improving functions to backup and restore a bash dictionary [duplicate]
|
chown can be used to change the ownership of a file or directory
chown -R root:user /dir
changes the owner to root and the group to user
-R performs the command recursively in a directory
same for
chmod -R 600 /dir
then you specify the permissions and ownership you want
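If the goal is to take the owner/group and mode of each file from the backup tree rather than setting one fixed value, GNU chown and chmod also accept --reference; a sketch, assuming GNU find/coreutils and that /backup and /data have identical layouts:
# for every path in the backup, apply its owner/group and mode to the same path under /data
cd /backup
find . -exec chown --reference='{}' "/data/{}" \; -exec chmod --reference='{}' "/data/{}" \;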
|
So at work I made a mistake and it changed all the file permissions (file owner and group) recursively inside a folder. I have a backup of that folder. I'm working on an NFS server and my idea is to copy the permissions of the backup directory to the directory where I made the mistake, recursively. All the folders and files have the same names, and I want to grab the permissions of each file/directory from the backup and copy them to the original directory, recursively.
|
How to recursively change permissions with backup file comparing permissions
|
0
QuestDB appends data in following sequence
Append to column files inside partition directory
Append to symbol files inside root table directory
Mark transaction as committed in _txn file
There is no order between 1 and 2 but 3 always happens last. To incrementally copy data to another box you should copy in opposite manner:
Copy _txn file first
Copy root symbol files
Copy partition directory
Do it while your slave QuestDB server is down and then on start the table should have data up to the point when you started copying the _txn file.
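A sketch of that copy order with rsync; the db root path and the table name trades are placeholders to adapt to your layout, and the standby server must be stopped while you copy:
# 1. the transaction file first
rsync -a /var/lib/questdb/db/trades/_txn  standby:/var/lib/questdb/db/trades/
# 2. the symbol/metadata files sitting in the table root (no sub-directories yet), keeping the _txn from step 1
rsync -a --exclude='*/' --exclude='_txn' /var/lib/questdb/db/trades/  standby:/var/lib/questdb/db/trades/
# 3. the partition directories last
rsync -a --exclude='_txn' /var/lib/questdb/db/trades/  standby:/var/lib/questdb/db/trades/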
Share
Follow
answered Apr 19, 2021 at 9:07
Alex des PelagosAlex des Pelagos
1,34588 silver badges99 bronze badges
Add a comment
|
|
I am running QuestDb on production server which constantly writes data to a table, 24x7. The table is daily partitioned.
I want to copy the data to another instance and update it there incrementally, since the old days' data never changes. Sometimes the copy works, but sometimes the data gets corrupted, reading from the second instance fails, and I have to retry copying all the table data, which is huge and takes a lot of time.
Is there a way to backup / restore QuestDb while not interrupting continuous data ingestion?
|
Can I copy data table folders in QuestDb to another instance?
|
0
I would strongly encourage you to look at the dbatools powershell module for this. For your needs:
$server = 'yourServer';
$backups = get-dbabackupinformation -SqlInstance $server -Path 'D:\BACKUP';
$backups | restore-dbadatabase -SqlInstance $server -withreplace;
Note - restore-dbadatabase has a bunch of options. The defaults are pretty good for most cases but see what's available.
But also! Two things grab my attention about your sample:
You're restoring the master database. That should only be done if something extraordinary happened (e.g. disk corruption, an admin or malicious actor trashed the server, etc).
You're restoring the same database from multiple different files without specifying with norecovery.
I'd guess that both of these come from playing fast and loose with the sample code. But it's worth calling out, as restoring master will have consequences for your server, and restoring multiple files could make your restore take longer than it should (e.g. if each of those backups represents a daily backup, restoring only the one that represents your desired RPO makes sense).
Share
Follow
answered Apr 6, 2021 at 19:32
Ben ThulBen Thul
31.7k44 gold badges4646 silver badges7171 bronze badges
Add a comment
|
|
I have list of SQL databases taken as a backup and stored in D:\Backup\ drive, Task is to restore all backups, back to SQL Server. Looking for stored procedure which will open each file from directory and restore all files one by one to server.
Sample list of Databases
restore database master from disk='D:\BACKUP\DB1.bak' with replace
restore database master from disk='D:\BACKUP\DB2.bak' with replace
restore database master from disk='D:\BACKUP\DB3.bak' with replace
restore database master from disk='D:\BACKUP\DB4.bak' with replace
restore database master from disk='D:\BACKUP\DB5.bak' with replace
restore database master from disk='D:\BACKUP\DB6.bak' with replace
restore database master from disk='D:\BACKUP\DB7.bak' with replace
Thanks
|
Restore SQL Back ups back to server from Directory
|
0
You are using the mysql command, not the mysqldump command.
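For reference, a typical mysqldump invocation looks something like this (database name, credentials and output file are placeholders):
mysqldump -u backup_user -p --single-transaction mydatabase > mydatabase_backup.sql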
Share
Follow
answered Apr 6, 2021 at 11:24
markusjmmarkusjm
2,40911 gold badge1212 silver badges2525 bronze badges
Add a comment
|
|
The backup command runs forever and neither writes any data to the backup file nor throws any error.
This was working fine earlier; it stopped suddenly, and no changes have been made to the DB or the DB config.
|
Maria DB backup generates zero KB file with no errors
|
Thank you @user19702. I was so consumed looking at the backup command with the added -BlockSize that I completely ignored the fact that my data had increased (thanks to a random process I had never heard of before today), and even though the PowerShell is written to start the backup AFTER the SSIS package, that process was not actually done. To find it I started Task Manager on the machine while the build was running and watched the process stay in memory for a few seconds after the backup started. I added a PowerShell command to make it wait a few seconds before running the backup and it's working.
In case anyone is wondering, this is the command:
$result = $package.Execute("false", $null)
Write-Host "Package ID Result: " $result
Start-Sleep -Seconds 10
Backup-SqlDatabase -ServerInstance "$env:ComputerName" -Database "RealDB" -BackupAction Database -BlockSize 4096 -BackupFile $Path
Thank You!!
|
The process: an Azure agent that runs on a Windows 10 32bit pro machine with SQL Server 2014 Express installed.
The pipeline is built and runs successfully with PowerShell scripts as follows:
Create blank database
Create tables needed
C# application runs and populates the tables executed via a PowerShell script
Cross reference tables to update data needed.
Build a SSIS package
After result from SSIS package is success perform a backup
Command:
Backup-SqlDatabase -ServerInstance "$env.ComputerName" -Database "RealDB"
-BackupAction Database -BackupFile $Path -Blocksize 4096
This all works with one exception: the actual backup I get is missing the data from the SSIS package run. If I log into the machine and restore the backup from $Path, the data is missing.
When I query the database after this process the data is there in the database.
There is only one database so its not backing up a different one.
I can run this command in powershell on the machine and my backup has the missing data that the powershell command from the agent does not.
Also, interestingly enough, if I remove -BlockSize 4096 it works as I expect and the backup has the data in it. I am considering abandoning the PowerShell approach because of this, but thought I would ask to see if anyone has experienced this or not.
Any help or thoughts are appreciated.
Thank you
|
Powershell Backup-SqlDatabase backs up a snapshot instead of a full backup
|
0
Based on the screenshots it appears that the hdbuserstore entries have been made as user root (the location of the SSFS_HDB-files gives this away: /root/.hdb/...).
Since the hdbuserstore information is specific to the OS-user, the entries stored as root will not be available to the sidadm user.
The solution for this is: create the required entries as sidadm or use the -u <username> parameter to set the entries for the user when running as root.
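A sketch of both variants (host, port and password are placeholders; as noted in the comments below, quoting the password matters when it contains special characters):
# as <sid>adm
hdbuserstore SET SYSTEM myhanahost:30013 SYSTEM "MyC0mplex!Passw0rd"
# or as root, writing the entry into the <sid>adm user's store
hdbuserstore -u <sid>adm SET SYSTEM myhanahost:30013 SYSTEM "MyC0mplex!Passw0rd"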
Share
Follow
answered Apr 6, 2021 at 12:00
Lars Br.Lars Br.
10.1k22 gold badges1717 silver badges3030 bronze badges
2
Thanks, I will attempt the above and see where I end up. :-)
– Terrence
Apr 7, 2021 at 5:57
Just a quick update, it turns out that the command did not like the complexity of the password (special characters) and once I put the SYSTEM password in quotes "password" it accepted the command and was able to complete the rest of the tasks.
– Terrence
Apr 7, 2021 at 7:08
Add a comment
|
|
I am trying to run the pre-reg script for Azure HANA backup but keep coming up with the error in below image.
If I open the terminal and run su SIDadm and hdbuserstore list, I get the message (screenshot below):
DATA FILE : /usr/sap/SID/home/.hdb/.../SSFS_HDB.DAT
Also if I run the hdbuserstore set system command I get a no match result.
If I open the terminal from /hana/shared/SID/hdbclient and run hdbuserstore then I can see the system key:
|
Azure SAP HANA Backup - hdbuserstore invalid system key
|
Read/Write/Delete/List permissions are required for both backup and restore. There is a Tech Community article that outlines the requirements.
https://techcommunity.microsoft.com/t5/datacat/sql-server-backup-to-url-a-cheat-sheet/ba-p/346358
|
We have a number of Azure VMs with SQL Server and we utilize Backup to URL and create our database backups in an Azure storage account and Block Blob container. SQL uses a credential created from a SAS policy on the container that grants the permissions needed to backup/restore databases (i.e., 'read', 'write', 'list').
I want to create a new policy that can only be used to restore databases from backups -- i.e., RESTORE DATABASE works, but BACKUP DATABASE/LOG does not. I've tried giving 'read' and 'list' which, I assumed, would be sufficient for restores, but this does not work. I also tried giving all permissions except for 'write' (i.e., 'read', 'add', 'create', 'delete', 'list') and it still failed. It is only when I explicitly grant 'write' to the policy that I'm able to restore a database from backups.
Is there a way to create a shared access signature policy with permissions needed to restore a database from backup, but not create new backups? Or is 'write' access required to simply restore from existing backups?
|
Unable to restore database from Azure storage account without 'write' permission to container
|
0
Execute
\d tout.site_collect
in psql to show the table definition with all its triggers. It must be a trigger or a rule that is defined on the table.
Share
Follow
answered Mar 17, 2021 at 11:12
Laurenz AlbeLaurenz Albe
225k1818 gold badges234234 silver badges303303 bronze badges
1
Thanks. There is indeed a trigger and the issue comes from there. Next step : trying to find how to fix this..
– A.Durant
Mar 17, 2021 at 12:07
Add a comment
|
|
I've got an issue with an INSERT INTO statement. Even the most basic one doesn't work:
INSERT INTO tout.p_sitetech(st_name)
SELECT name
FROM tout.site_collect
It returns :
error relation "fond.edi_comm" doesnt exist.
LINE 1: ...concat(left(B.id_comm,2),right(B.id_comm,3)) From fond.edi_c...
But there is no "fond" schema in the database (and therefore no "edi_comm" table).
The database I'm using is a backup; I may not have set it up correctly when restoring. But other INSERT INTO statements work.
Maybe it triggers something in the background? How do I identify it?
Thanks in advance for your answers and tips !
|
SQL error on INSERT : "relation does not exists" / but the table is not required
|
You must be sure that you are using the correct flash files for your JetPack version. For version 4.5.1 you could use this code (for other versions, please check the L4T Driver Package (BSP) at https://developer.nvidia.com/embedded/linux-tegra):
wget https://developer.nvidia.com/embedded/l4t/r32_release_v5.1/r32_release_v5.1/t186/tegra186_linux_r32.5.1_aarch64.tbz2
tar -xjvf tegra186_linux_r32.5.1_aarch64.tbz2
rm tegra186_linux_r32.5.1_aarch64.tbz2
cd Linux_for_Tegra/rootfs
sudo wget https://developer.nvidia.com/embedded/l4t/r32_release_v5.1/r32_release_v5.1/t186/tegra_linux_sample-root-filesystem_r32.5.1_aarch64.tbz2
sudo tar -xjvf tegra_linux_sample-root-filesystem_r32.5.1_aarch64.tbz2
sudo rm tegra_linux_sample-root-filesystem_r32.5.1_aarch64.tbz2 && cd ..
sudo ./apply_binaries.sh
sudo /bin/bash ./flash.sh -r -k APP -G nvidia.img jetson-tx2-devkit mmcblk0p1
[nothing else to edit]
|
I want to create an image from an NVIDIA Jetson TX2 using the flash.sh file. I managed to run it, but it throws an error.
I set the board to Recovery Mode and execute:
sudo /bin/bash ./flash.sh -r -k APP -G nvidia.img jetson-tx2 mmcblk0p1
Error: Invalid target board - jetson-tx2-devkit.
I'm using a Jetson TX2 P3310-1000, so the name should be correct; I also tried with jetson-tx2 and got nothing.
Ubuntu 18.04.5 LTS, Jetpack 4.5.1
|
Invalid target board - jetson-tx2-devkit
|
0
When talking about DR, the fundamental question is what kind of disasters you think of. Microsoft has its own DR, and I think they have higher reliability than what we might have in a local system.
An issue with a higher chance of happening is an internet problem that leaves you unable to reach the repository. Against that, the best option is having the repositories checked out on a local system.
There is the less probable issue of being locked out of your DevOps account, but I imagine you would be able to rectify that quickly.
In a DR discussion about our repos a while back, we decided it is less of a concern because of the way Git works. Git maintains a local repository on each of the computers that check out the code, so you can recover most of the code if worst comes to worst.
But these situations are very subjective, so you will have to think about your case.
Share
Follow
answered Mar 2, 2021 at 23:49
KrishKrish
7933 bronze badges
Add a comment
|
|
I would like to ask a question regarding Disaster Recovery. Earlier we used to have our own source code repo and build server, so we had a Disaster recovery plan as to restore from a backup, in case if something fails. When we moved to Azure Devops, everything including Repo , Build Piplines etc is managed by Microsoft. In that case what would be the recommended Disaster recovery strategy?
Standard answer - most deletion operations in Azure DevOps are recoverable- is not valid in our case.
Backrightup also doesn't suit in our situation.
|
Backup Azure DevOps Repositories, Workitems etc
|
0
Try overwriting your database file from getDatabasePath with the backup file you exported.
You might have to restart the app after this.
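A rough Kotlin sketch of that overwrite (the database name "app.db" and the way you obtain the backup file are assumptions; close any open Room instance first and restart the app afterwards):
import android.content.Context
import java.io.File

// Overwrites the Room database file with a previously exported copy.
fun importDatabase(context: Context, backupFile: File) {
    val dbFile = context.getDatabasePath("app.db")   // same name as passed to Room.databaseBuilder()
    backupFile.inputStream().use { input ->
        dbFile.outputStream().use { output ->
            input.copyTo(output)
        }
    }
    // remove stale -wal / -shm files so SQLite does not replay old journal data
    File(dbFile.path + "-wal").delete()
    File(dbFile.path + "-shm").delete()
}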
Share
Follow
answered Feb 27, 2021 at 2:47
Dan ArtillagaDan Artillaga
1,59511 gold badge1010 silver badges2020 bronze badges
Add a comment
|
|
I'm trying to implement import / export functionality for my app, which uses a Room database.
Exporting is implemented by just copying the database file given by getDatabasePath, but I've hit a problem when implementing the Import functionality.
I've tried using Room.databaseBuilder(...).createFromFile() but it does not help, since it only works when creating the database.
I've looked at other answers to similar questions which suggest overwriting the database file, but I'd preferably like something a bit less "hacky" (When I tried it it didn't seem to work either).
If possible, importing the data should be:
Destructive - remove all prior data in the room database.
possible from a File, Uri or InputStream.
Possible on runtime, so users don't need to restart the app.
Hopefully, I won't need to manually transfer the data between the databases.
|
Import data from sqlite3 database file to Room database on runtime
|
0
Of course I entered the correct database name and password, but I want to back up everything (using the script https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux).
I have one copy of .pgpass in /home/ (there is no subfolder in /home/) and one in the folder where my script currently lives (/var/www/backup_scripts/).
If I start the script it asks me 3 times for a password:
Performing globals backup
--------------------------------------------
Globals backup
Passwort:
Performing schema-only backups
--------------------------------------------
Passwort für Benutzer postgres:
Performing full backups
--------------------------------------------
Passwort für Benutzer postgres:
Share
Follow
answered Mar 2, 2021 at 10:48
user3082653user3082653
7111 gold badge11 silver badge1010 bronze badges
Add a comment
|
|
I am trying to use the scripts from the posgresql wiki: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux
as an automated solution in a cronjob. My main problem is that the script asks for the password. Thus automation is not possible.
I read in multiple places that you can create a ".pgpass" which can solve all problems. So I created the file containing this info:
# file: .pgpass
# hostname:port:database:username:password
localhost:5432:DB:postgres:"supersecretpassword"
and changed the rights to 0600 (chmod 0600 /.pgpass).
But it didn't change the outcome at all. I placed it in several locations with no success (root, script folder, ...).
My second problem is that I want to use the entire script, backing up everything with multiple daily and weekly backups, and from what I understand this only allows the password to be used for one single database (DB).
|
Postgresql backup cron job using the script from the wiki
|
I definitely wouldn't recommend using Backup & Migrate for this - that's so Drupal 7! Drupal 9 has better tools that are baked into core!
There are many possible ways to import/export Config and Content entities across environments, but I'll share what I believe to be the current best practices.
For Configuration, Drupal 9 has a built-in Configuration Management system that makes it quite easy to migrate Config across environments. It can be used within the Drupal UI, and also integrates with Drush (a command-line tool for Drupal).
Essentially, the Config system exports all Config settings as standardized YAML files, which can easily be included in your Git repository. This will make it incredibly easy to set up another Drupal environment that is identical in its Config settings.
More info on Configuration Management can be found here.
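With Drush (9+) the round trip is essentially two commands; a sketch, assuming your config sync directory is committed to Git:
# on the source environment: dump all active configuration as YAML into the sync directory
drush config:export
# commit/deploy the YAML, then on the target environment:
drush config:import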
For Content, Drupal 9 has a built-in Migrate API, which facilitates migrations from any data source into a Drupal 9 environment. That means you could set up a migration that would allow you to migrate your Content entities across environments.
I've only ever used this for migrating Content (migrated from Drupal 7), but I believe it's also possible to use this to migrate Config as well.
If you decide to use the Migrate API, you may (depending on the setup of your content) need to install Migrate Tools and Migrate Plus.
More info on the Migrate API can be found here.
|
I'm very new to Drupal, so please don't be too mad in case I have any major misunderstandings :) I've tried searching for a similar problem, but is just couldn't find a suitable solution for my case.
We're currently setting up a Drupal 9 project, which will perspectively have a shared development environment and a production environment as well as a local instance to develop on. I'd wish to have a way to synchronize those instances to have the same configuration, content types and optionally even content.
At the moment, I'm developing a theme locally, which means I have installed a Drupal instance inside a XAMPP server. That theme is versioned by git, so it is migratable to another developer without a problem.
For migrating the structure and content (which is obviously saved in the database), I tried using Backup & Migrate, but there were two issues I was facing: The D9 version is not fully supported yet, so an installation via composer fails with default security settings, and there seems to be an already multiple times reported bug when trying to backup the entire site. You can workaround it by backing up the database and the files separately, but this is pretty inconvenient due to other issues (but let's keep it a little short...).
I also tried to export the whole database, which is actually working (after this little fix), but the overhead seems a little high for me. Especially when I just want to copy new content types from dev to prod environment without users, content and so on, for instance.
So, to finally come to an end, is there any best practice for this case? Or should I even consider to go a whole other way?
Thanks in advance!
|
Migrating structure and content between instances in Drupal 9
|
I finally found the problem.
Do not use the "id" field if you want to add a new row.
|
To create an archive of my table named store, I would like to back up a row and store it in a specific table (named audit) before an update.
And afterwards I would like to back up the same row and store it in a specific table (named histo) after the update.
I thought of a TRIGGER.
Like this, but it does not work because there are 2 INSERT INTO statements:
BEGIN
IF (NEW.storage_0 != OLD.storage_0 OR NEW.storage_1 != OLD.storage_1)
THEN
INSERT INTO audit(id,Date_insert,name,storage_1,storage_2) VALUES (OLD.id,OLD.Date_insert,OLD.name,OLD.storage_1,OLD.storage_2);
INSERT INTO histo(id,Date_insert,name,storage_1,storage_2) VALUES (NEW.id,NEW.Date_insert,NEW.name,NEW.storage_1,NEW.storage_2);
ELSEIF (NEW.Date_insert IS NULL)
THEN
INSERT INTO audit(id,Date_insert,name,storage_1,storage_2) VALUES (OLD.id,OLD.Date_insert,OLD.name,OLD.storage_1,OLD.storage_2);
INSERT INTO histo(id,Date_insert,name,storage_1,storage_2) VALUES (NEW.id,NEW.Date_insert,NEW.name,NEW.storage_1,NEW.storage_2);
END IF;
END
|
How to create 2 archives using a TRIGGER?
|
0
I just read the rclone documentation and it looks like --ignore-existing is aimed almost exactly at this encryption/ransomware scenario, according to the docs:
--ignore-existing Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of
these files.
While this isn't a generally recommended option, it can be useful in
cases where your files change due to encryption. However, it cannot
correct partial transfers in case a transfer was interrupted.
So I think it will work to prevent that.
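For what it's worth, the flag simply goes on the copy command (the remote and path names here are placeholders):
rclone copy /data backupremote:backups/data --ignore-existing --verbose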
Share
Follow
answered Feb 13, 2021 at 18:53
T. LacyT. Lacy
1111 silver badge33 bronze badges
Add a comment
|
|
I am rclone backing up files multiple times a day. I would like my backup server to be a recovery point from ransomware or any other error.
Am I correct that if I do a
rclone copy --ignore-existing
, my backup server is safe from the ransomware. If all of my files on my main server get encrypted the file name would stay the same and they wouldn't overwrite my backup server files with the encrypted files because I have --ignore-existing. It will ignore any size/time/checksum changes and not transfer those files over because they already exist on the back up? It won't transfer over the encrypted files that overwrite my existing good files?
I could then delete my main server and copy everything from my recovery over to the main and restore everything?
|
Will rclone --ignore-existing prevent ransomware damages?
|
0
import os
import datetime as dt

now = dt.datetime.now()
ago = now - dt.timedelta(days=180)          # roughly 6 months ago

for root, dirs, files in os.walk('.'):      # walk the current directory tree
    for fname in files:
        path = os.path.join(root, fname)
        st = os.stat(path)
        mtime = dt.datetime.fromtimestamp(st.st_mtime)   # last modification time
        if mtime > ago:                                  # modified within the last ~6 months
            print('%s modified %s' % (path, mtime))
Output
Share
Follow
answered Feb 10, 2021 at 9:25
SumanSuman
40155 silver badges1515 bronze badges
2
Hi, I think it's not work, it does print the last modified date out, but it does not filter it, because I want to back up the file within 6 months only, grater than 6 months no need to back up, wondering do u know how to fix it ? Thanks Suman !!
– 820okok
Feb 10, 2021 at 9:40
Yes give me 10 mintute.
– Suman
Feb 10, 2021 at 10:08
Add a comment
|
|
I am a new Python learner and I am doing backup automation with Python. However, I would like to only back up files whose last-modified date is within the past 6 months. How can I write that in Python? Many thanks!
|
Python backup the file with last modified date within 6 months
|
0
"If I restore full backup with norecovery, would I be able to restore 2/2/2021 differential backup?" --- YES
"how differential backup on 2/2/2021 would know about all the changes since 1/1/2021" --- through bitmap . "The differential backup operation relies on a bitmap page that contains a bit for every extent. For each extent updated since the base, the bit is set to 1 in the bitmap."
Share
Follow
answered Feb 5, 2021 at 10:31
kaiqiang zhangkaiqiang zhang
111 bronze badge
Add a comment
|
|
I created a full backup on 1/1/2021. Then I stopped doing full backups.
For a month I was doing just differential backups.
If I restore full backup with norecovery,
would I be able to restore 2/2/2021 differential backup?
I don't know how differential backup on 2/2/2021 would know about all the changes since 1/1/2021
|
SQL Server differential backups in a Simple recovery model
|
It turns out I had two versions of Postgres installed simultaneously, so I was able to backup with a simple 'pg_dumpall' before upgrading the clusters. This manpage and this blogpost were both helpful in sorting things out.
Because I had a lot of onboard storage free I let pg_upgradecluster populate the default data folder, and then I copied it all over to my external, and edited the conf file to point back to that. (And then, yes, I made a backup of the upgraded cluster.)
|
So, I was careless and upgraded Ubuntu from 18 to 20 -- thus postgresql from 10 to 12 -- WITHOUT making backups of my postgresql-10 cluster. Now that I'm looking into upgrading the cluster to work with 12, I'm realizing that was a mistake. Is there a way to back them up before attempting to upgrade them, now that postgres itself is already upgraded?
I could just copy the whole data folder and zip it up somewhere, but (a) that'd be a lot of disk space, and (b) I definitely don't yet understand postgres well enough to restore from those files.
(The last annoying thing here, which maybe deserves its own question, is that my pg10 data directory is on an external drive, which I'd like to keep using. Even if I can solve my backup problem, I'm not sure what the "easiest" way to do this is...)
EDIT: Actually I think my problem is a little different than I thought, and the postgres backup tools might still work for me. I will report back!
|
Backing up old clusters AFTER upgrading Postgresql
|
Well, try this
Sub createBackUp()
Dim sURL As String
Dim aURL As Variant
Dim saveTime As String
sURL = ThisComponent.getURL()
If Trim(sURL) = "" Then Exit Sub ' No name - cannot store
saveTime = "_" & FORMAT(Now,"YYYYMMDD\_HHmmSS")
aURL = Split(sURL, ".")
If UBound(aURL) < 1 Then ' No extention?
sURL = sURL & saveTime
Else
aURL(UBound(aURL)-1) = aURL(UBound(aURL)-1) & saveTime
sURL = Join(aURL,".")
EndIf
On Error Resume Next
ThisComponent.storeToURL(sURL,Array())
On Error GOTO 0
End Sub
Also you can try Timestamp Backup
|
Is there a way, setting, macro, or otherwise, that can automatically create backups of the current document in a series? Such as, working on a Writer document, pressing a macro button, and creating a backup at that time, so that there is another backup added to the previous backups in a folder?
|
Libreoffice Multiple Backups?
|
0
Configure Auditing or a Trace to track the additional information.
You can then look back over the output logs and see where it came from.
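On SQL Server 2005 the default trace already records backup/restore events, so a query along these lines can show who issued them (a sketch; the event name and available columns are worth double-checking on your build):
SELECT t.StartTime, t.HostName, t.LoginName, t.ApplicationName, t.DatabaseName, t.TextData
FROM sys.traces AS tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) AS t
JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE tr.is_default = 1
  AND te.name = 'Audit Backup/Restore Event'
ORDER BY t.StartTime DESC;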
Share
Follow
answered Jan 19, 2021 at 7:09
Martin CairneyMartin Cairney
1,74911 gold badge77 silver badges1818 bronze badges
Add a comment
|
|
Every day, a job connects to a SQL Server 2005 instance, performs a dump transaction with truncate_only (or no_log), and breaks my transaction backup sequence.
I checked the Agent and the task scheduler, none of them is hosting such a job.
It occurs every day at 2:38PM, this I know from the server log showing this kind of error :
BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated. The simple recovery model should be used to automatically truncate the
transaction log.
After digging in Profiler I could not see any column showing the IP of the sessions; alternatively, I could continuously pull data from this query:
select * from
sys.dm_exec_connections A,
sys.sysprocesses B
where A.session_id = B.spid
But I wonder if I can catch the job since the transaction segment is very small.
On another note, it would be nice if I could hang the backup itself by locking the transaction log file, so I would have time to see which process is stuck trying to dump the transaction.
Any ideas?
|
How to catch a dump transaction in an SQL Server 2005
|
0
There are just two commands you need for this:
GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
Followed by:
FLUSH PRIVILEGES;
Hope this helps 👍🏻
Share
Follow
answered Jan 15, 2021 at 5:24
matigomatigo
1,39111 gold badge88 silver badges1717 bronze badges
1
Not worked. Got the below error. ``` mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces mysqldump: Got error: 1044: Access denied for user 'liquibaseuser'@'%' to database 'automation_suite' when selecting the database ```
– ShanWave007
Jan 15, 2021 at 5:31
Add a comment
|
|
I need to grant only database table backup permissions to one new MySQL user account. My MySQL version is 5.6. I've tried the command below but it's not working. Could someone please help?
GRANT SELECT, CREATE, UPDATE, DELETE, LOCK TABLES, RELOAD, SHOW VIEW ON *.* TO 'username'@'%' IDENTIFIED BY 'test123';
|
How to add ONLY mysql database backup permission for a user in mysql 5.6?
|
For videos, images, etc., Amazon S3 is one of the main choices. It also comes with a great Node.js API to connect to your S3 bucket.
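A minimal sketch with the AWS SDK for JavaScript v3 (the bucket name, region and key are placeholders):
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Uploads one media file from the chat to the backup bucket.
async function backupMedia(localPath: string, key: string): Promise<void> {
  const body = await readFile(localPath);
  await s3.send(new PutObjectCommand({
    Bucket: "my-chat-backups",   // placeholder bucket name
    Key: key,
    Body: body,
  }));
}

// usage: await backupMedia("./uploads/video123.mp4", "media/video123.mp4");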
|
I'm adding a chat to my app with a Node.js server. I'll save a cache to local storage using MongoDB Realm, but I know I have to put a limit on the data saved on the phone, just some megabytes; that's why I'm thinking of creating a backup on another server. Right now it is on DigitalOcean, but saving all the data there could be expensive considering images and videos and, of course, being optimistic...
Could you please recommend an option? I was thinking about using AWS Glaciar but I'm very open to other options, thanks for your time!!
|
Use a server for chat backup like Telegram or WhatsApp
|
On Azure DevOps side, after you delete your organization, it's disabled but available for 28 days. If you change your mind during this time, you can recover your organization. After 28 days, your organization and data are permanently deleted.
Prerequisites
An organization deleted within the last 28 days.
Organization Owner permissions to restore your organization.
Recover organization
Sign in to your Visual Studio profile.
On your profile page, go to the lower Organizations Pending Deletion section, and then select Restore.
More details you can refer to the documentation below:
https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/recover-your-organization?view=azure-devops
|
I have create an "Azure DevOps organization" in Azure portal.
I am using this service to store my source code (git repositories).
I am wondering: what happens if I accidentally delete my Azure DevOps organization in the Azure portal? My account may be hacked too. Are there automatic backups in Azure? This question also applies to all virtual machines and every resource type on Azure.
Is there a way to download all my Azure data in a single tar file?
Thanks a lot
|
Is there a way to backup all my azure services datas
|
0
Based on this "(Get-ChildItem $source -Filter *.* | Sort LastAccessTime -Descending)[0]" you only ever expect to copy 1 file per day. And it sounds like the problem is that the script copies a file over even when no new files have been added to $source. Hopefully I have that right.
Maybe you can add a filter like below, assuming your file is added on a regular schedule
(Get-ChildItem $source -Filter *.* | Sort LastAccessTime -Descending | ? {$_.LastAccessTime -gt (Get-Date).AddDays(-1)})[0] #may want to use LastWriteTime or CreationTime in place of LastAccessTime. Can also fiddle with .AddDays - .AddMinutes, .AddHours, etc.
Alternatively you can check your $bkp folder to see if the file exists before copying it:
@(Get-ChildItem $source -Filter *.* | Sort LastAccessTime -Descending)[0] | % {
    $srcFile = $_
    #check if file exists in $bkp before copying from $source
    #"$($srcFile.Name)*" part tries to account for the file in $bkp having a timestamp appended to the name
    $x = Get-ChildItem $bkp -Recurse | ? {$_.Name -like "$($srcFile.Name)*"}
    if(!$x){
        Copy-Item -Path $srcFile.FullName -Destination $("$bkp\$srcFile$timestamp") -Force
    }
}
Share
Follow
answered Jan 7, 2021 at 18:40
PetePete
3622 bronze badges
Add a comment
|
|
I am working on a script to create a daily backup ( task schedule )
First I copy the folder "source_folder" and rename all the files with a timestamp inside the "bkp" folder. When a new file is added to "source_folder" I need to copy only the last file and also rename it. (I tried with LastModified and LastAccessTime, but when I run the script again (the next day) the last file is duplicated if no other file was created in source_folder.)
Any advice ?
$sourceFiles = Get-ChildItem -Path $source -Recurse
$bkpFiles = Get-ChildItem -Path $bkp -Recurse
$syncMode = 1
if(!(Test-Path $bkp)) {
Copy-Item -Path $source -Destination $bkp -Force -Recurse
Write-Host "created new folder"
$files = get-ChildItem -File -Recurse -Path $bkp
foreach($file in $files){
# Copy files to the backup directory
$newfilename = $file.FullName +"_"+ (Get-Date -Format yyyy-MM-dd-hhmmss)
Rename-Item -path $file.FullName -NewName $newfilename
}
}
elseif ((Test-Path $bkp ) -eq 1) {
$timestamp1 = (Get-Date -Format yyyy-MM-dd-hhmmss)
$timestamp = "_" + $timestamp1
@(Get-ChildItem $source -Filter *.*| Sort LastAccessTime -Descending)[0] | % {
Copy-Item -path $_.FullName -destination $("$bkp\$_$timestamp") -force
}
Write-Host "most recent files added"
}
|
Backup folder in Powershell
|
You don't need to iterate over each file; you could do something like:
Dim objFSO
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set wshNetwork = CreateObject("WScript.Network")
strUser = wshNetwork.Username
objFSO.CopyFile "C:\Users\" & strUser & "\Desktop\vbs\*.xlsx", "E:\test2\"
You may also want to set the overwrite flag when doing the copy, if there are existing files already in the destination folder.
objFSO.CopyFile "C:\Users\" & strUser & "\Desktop\vbs\*.xlsx","E:\test2", True
|
This question already has answers here:
How to copy a file from one folder to another using VBScript
(6 answers)
Closed 3 years ago.
I want to make an automatic backup of my excel files using vbscript.
It works to copy the entire folder but I want to copy only the xlsx files.
Here is the code until now:
Dim objFSO, objFolder, evrFiles
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set evrFiles = objFolder.Files
For Each evrFile in evrFiles
If InStr(1, evrFile.Name, ".xlsx", vbBinaryCompare) > 0 Then
objFSO.CopyFile "C:\Users\Home\Desktop\vbs\" & evrFile.Name, "E:\test2"
End If
Next
WScript.Quit
It throws error on line 5 char 1 "Object required: " "
Any ideas?
LE: I have also tried:
Dim objFSO, objFolder
Set wshNetwork = CreateObject("WScript.Network")
strUser = wshNetwork.Username
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFolder = objFSO.GetFolder("C:\Users\" & strUser & "\Desktop\vbs")
Set evrFiles = objFolder.Files
For Each evrFile in evrFiles
If InStr(1, evrFile.Name, ".xlsx", vbBinaryCompare) > 0 Then
objFSO.CopyFile "C:\Users\" & strUser & "\Desktop\vbs\" & evrFile.Name, "E:\test2"
End If
Next
WScript.Quit
But this one gives me "Permission denied on line 9 char 3"
This one works(to copy the entire folder) but I want only the excel files.
Dim objFSO
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set wshNetwork = CreateObject("WScript.Network")
strUser = wshNetwork.Username
objFSO.CopyFolder"C:\Users\" & strUser & "\Desktop\vbs","E:\test2"
|
backup only some files using vbs [duplicate]
|
You did everything correctly. You were just missing a second pair of eyes to see that you have a typo in the second line:
Baclup vs Backup
;-)
|
I'm not a linux expert and need some support to a crontab mystery (for me).
I'd like to do a backup of my raspberry pi twice a week.
It's the same script. But only the every monday trigger (dow=1) executes.
The Friday rule (dow=5) does nothing at all - no backup saved.
I can't see why.
What's going wrong? Where can I find out what's going wrong?
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 4 * * 1 /home/pi/Backup/backup.sh > /dev/null
0 4 * * 5 /home/pi/Baclup/backup.sh > /dev/null
screenshot of crontab -e
|
crontab: same script is triggered only on one day
|
0
You cannot restore a backup of a Hyper-V VM, stored in Azure, to Azure as an Azure VM. Because currently this is not a supported scenario. You can only restore to an on-premises host.
FAQ reference URL - https://learn.microsoft.com/en-us/azure/backup/backup-azure-dpm-azure-server-faq#can-i-restore-a-backup-of-a-hyper-v-or-vmware-vm-stored-in-azure-to-azure-as-an-azure-vm
Share
Follow
answered Jan 6, 2021 at 18:33
SadiqhAhmed-MSFTSadiqhAhmed-MSFT
17144 bronze badges
Add a comment
|
|
I have been having trouble finding specific answers to my questions about Azure Backup Server. Basically, I have a client with a Hyper-V Host and two guests. All are running Server 2019. Does Azure Backup Server provide the mechanism to easily restore and spin up these servers in the Azure cloud for quarterly testing, or will I need to create an Azure cloud host to perform this testing?
|
Azure backup server
|
0
The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from Azure. You can back up anything from on-premise data to Azure File Shares and VMs, including Azure PostgreSQL databases.
To start off, you can learn more about Azure Backup here. This article summarizes Azure Backup architecture, components, and processes. While it’s easy to start protecting infrastructure and applications on Azure, you must ensure that the underlying Azure resources are set up correctly and being used optimally in order to accelerate your time to value.
To learn more about the capabilities of Azure Backup, and how to efficiently implement solutions that better protect your deployments, detailed guidance and best practices have been described to design your backup solution on Azure.
For additional reading, also refer to some Frequently asked questions about Azure Backup.
Share
Follow
answered Jan 3, 2021 at 7:53
Bhargavi AnnadevaraBhargavi Annadevara
5,17222 gold badges1515 silver badges3434 bronze badges
Add a comment
|
|
I'm looking for a link or help file on Azure Backup lifecycle management, or any guidance on how to design backup lifecycle management.
|
Azure backup lifecycle management
|
To copy a full filesystem from a remote host, you can use the command:
ssh user@hostname 'sudo tar -cJf - --acls --selinux --one-file-system /' > full_dump.tar.xz
Then extract the archive into the current directory. To do this, use something like this:
tar xf full_dump.tar.xz
|
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
Improve this question
I need to copy a drive from a remote host. I can't do it just with scp; maybe there are some workarounds or solutions for this?
The operating system is Ubuntu 20.04.
If you need more information, please ask me.
I have searched on Google but nothing I found works.
Thank you very much.
|
Backup latest ubuntu server [closed]
|
0
Moving publications to a new repository will be a more substantial undertaking!
But your recent problem seems to be simply that you are either not on the right container or not in the right directory when executing the dspace command, hence it is "not found". Make sure to execute dspace on the dspace container and specify the right/complete path. The dspace command is located in
/path/to/your/dspace-deployement-directory/bin.
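A minimal sketch of running it inside the container, assuming the container is named dspace and the deployment directory is /dspace, with an illustrative eperson, handle, and output file (the packager arguments simply mirror the ones from the question):
docker exec -it dspace /dspace/bin/dspace packager -s -t AIP -e admin@example.com -p 123456789/1 /tmp/site-aip.zip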
Share
Follow
edited Dec 17, 2020 at 14:01
answered Dec 17, 2020 at 13:52
MartinWMartinW
5,00122 gold badges2525 silver badges6161 bronze badges
Add a comment
|
|
I am using the cloned dspace 6-x branch and installed it via docker. Can someone help me with the backup of my local database (communities, collections, items) to a remote database?
According to the documentation we need to use the command:
dspace packager -s -t AIP -e eperson -p parent-handle file-path
But it returns an error: dspace is not a command
Could anyone help me transfer my local database to my remote repo?
Thanks!
|
AIP backup - using Docker
|
0
Use --exclude='pattern', where the pattern can be a file or a directory, e.g. '/dir/file*'.
There is another option too, --exclude-from, which reads the patterns from a file. A sketch of both follows below.
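A minimal sketch of both options, plus the mysqldump alternative mentioned in the comments; database, table, and path names are illustrative:
# skip the data files of one table during the rsync
rsync -av --exclude='mydb/big_table.ibd' /var/lib/mysql/ user@uat-host:/var/lib/mysql/
# or keep the patterns in a file
rsync -av --exclude-from='exclude.txt' /var/lib/mysql/ user@uat-host:/var/lib/mysql/
# logical alternative: skip the table at dump time instead of copying raw files
mysqldump --single-transaction --ignore-table=mydb.big_table mydb > mydb.sql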
Share
Follow
answered Dec 14, 2020 at 21:56
v-nodev-node
2111 bronze badge
2
Thank you for your answer. Do you know specifically which files I should exclude? That is, which files in the mysql directory correspond to a given table?
– Elisavet
Dec 14, 2020 at 22:06
Sorry, didn't realize you were trying to ignore tables. Best bet would be mysqldump with the --ignore-table option. I won't advise a physical copy in this case since it may mess up dictionary entries.
– v-node
Dec 15, 2020 at 3:20
Add a comment
|
|
Every day we back up a folder from production to UAT with rsync.
That folder, among others, contains the physical files of a mysql database.
Is it possible to exclude some tables during the backup process?
Edit: If yes, which files should I exclude in the mysql directory?
Thank you!
|
mysql : Is it possible to ignore some tables in cold backup?
|
No direct import path was found in the end; the data was re-entered as INSERT statements generated by parsing the file, along the lines of the sketch below.
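A minimal sketch of that approach, assuming the exported rows ended up as comma-separated text inside the notebook's cells — an .ipynb file is plain JSON, but the actual layout of the exported data may differ, and the table name here is hypothetical:
import json

with open("backup.ipynb", encoding="utf-8") as fh:
    nb = json.load(fh)                      # a notebook is just a JSON document

inserts = []
for cell in nb.get("cells", []):
    text = "".join(cell.get("source", []))  # cell source is stored as a list of strings
    for line in text.splitlines():
        if not line.strip():
            continue
        values = [v.strip() for v in line.split(",")]        # assumed CSV-like rows
        quoted = ", ".join("'" + v.replace("'", "''") + "'" for v in values)
        inserts.append(f"INSERT INTO dbo.MyTable VALUES ({quoted});")

with open("reimport.sql", "w", encoding="utf-8") as out:
    out.write("\n".join(inserts))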
|
At the recommendation of my boss (who doesn't remember saying it), I backed up data from several tables into an .ipynb file using the Tasks->Export Data... command in SSMS. I have finished the task that required me to set aside the data I was working with, and I now need to return to my original task.
When I use Tasks->Import Data..., none of the delimiters are accepted by the SQL Server Import and Export Wizard. A Google search for how to import brings up plenty of Python and Jupyter articles and links, but none seem to apply. Help.FastHosts.UK has a fine tutorial, but fails to mention the delimiter selection that would allow an .ipynb file to be imported.
Can anyone tell me how to reimport data from an .ipynb file back into SQL Server?
SQL Server Import and Export Wizard
|
Reimport SQL Server backup from an .ipynb file
|
0
Make sure that "Bayne_DB" owns all affected objects.
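A minimal sketch, using the object and role names from the question and assuming the point is that only the owner of the objects (or a superuser) can grant privileges on them:
-- see who owns the relations in the schema
SELECT relname, relowner::regrole
FROM pg_class
WHERE relnamespace = 'abcd'::regnamespace;

-- as the current owner or a superuser, transfer ownership, then re-run the GRANT
ALTER SEQUENCE abcd.provider_seq OWNER TO bm_clients;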
Share
Follow
answered Nov 17, 2020 at 11:26
Laurenz AlbeLaurenz Albe
225k1818 gold badges234234 silver badges303303 bronze badges
Add a comment
|
|
I connected to my db using this command
psql -U bm_clients -d Bayne_DB
and then I tried to run this command
Bayne_DB=> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA abcd TO bm_clients;
For which I received this error
ERROR: permission denied for relation provider_seq
How to resolve this?
|
getting permission denied for relation in postgresql
|
0
I believe that the backup file is empty because in the Opera AndroidManifest.xml file the flag allowBackup is set to false.
As a workaround you can do the following:
decompile the apk using apktool
change the AndroidManifest and set allowBackup=true
rebuild the Opera apk using apktool
back up again and most likely the backup file won't be empty (see the command sketch after this list)
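A sketch of those steps on the command line; file and keystore names are illustrative, and the rebuilt APK has to be re-signed (and possibly zipaligned) before it can be installed:
apktool d opera.apk -o opera_src
# edit opera_src/AndroidManifest.xml and set android:allowBackup="true"
apktool b opera_src -o opera_patched.apk
apksigner sign --ks my-release-key.keystore opera_patched.apk
adb install -r opera_patched.apk
adb backup -f backup.ab com.opera.browser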
Share
Follow
answered Nov 17, 2020 at 10:09
LinoLino
5,54433 gold badges2222 silver badges4343 bronze badges
Add a comment
|
|
I want to inspect my opera browser's appstate.bin file. If I execute adb backup com.opera.browser the resulting backup.ab is empty after converting to tar format.
|
Is there a way to use adb to get an android app's appstate.bin without rooting the phone?
|
0
You can set up a cron job to run an RMAN database backup daily or weekly.
If you only want to keep backups for a day or a week, just set the retention policy to a day or a week, depending on your needs. A sketch of both pieces is shown below.
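A minimal sketch with illustrative paths: a crontab entry that runs a backup script every Sunday at 02:00, and an RMAN session inside that script that sets a one-week retention policy and removes obsolete backups:
# added via "crontab -e" for the oracle user
0 2 * * 0 /home/oracle/scripts/rman_backup.sh >> /home/oracle/scripts/rman_backup.log 2>&1

# inside rman_backup.sh
rman target / <<EOF
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EOF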
Share
Follow
answered Nov 12, 2020 at 3:41
scott yuscott yu
12511 silver badge33 bronze badges
3
Could you show me how I can set up a cron job for the RMAN database backup, daily or weekly?
– Grant
Dec 3, 2020 at 4:24
If you run on Unix/Linux, you can run crontab -e; this opens the crontab, where you specify how often your script runs. That is how you schedule the RMAN backup job. See man7.org/linux/man-pages/man5/crontab.5.html
– scott yu
Dec 3, 2020 at 12:12
The following is an RMAN cron job example: see dba.stackexchange.com/questions/139309/cron-and-rman-backups
– scott yu
Dec 3, 2020 at 12:13
Add a comment
|
|
My db version is oracle 12c. OS: oracle linux
I have working sh file
export ORACLE_SID=mydborcl
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export NLS_LANG='AMERICAN_AMERICA.AL32UTF8'
expdp adminuser/Mypassword@mydborcl schemas=adminuser directory=my_db1 dumpfile=adminuser_`date +%Y%m%d`.dmp logfile=adminuser_`date +%Y%m%d`.log
How can I call that sh file daily or weekly?
How do I set it to automatically delete the old backups?
|
How do I call sh file daily? ora_backup
|
The problem was the directory path. The other paths are built from Environment.getDataDirectory() plus a relative path, but the dir string was not, so it contained only one "data" segment. With /data/data/... it works and the directory is created:
String dir = "/data/data/my_package/bkp/";
|
I want to achieve a simple thing - after pressing a button, I want to copy the application database file (from database folder) to a newly created folder BKP (create if not exists).
The problem is, I always get
java.io.FileNotFoundException: /data/data/my_package/bkp/mydb.db (No such file or directory)
I tried it like this:
public void backUp() {
try {
File data = Environment.getDataDirectory();
String currentDBPath ="/data/my_package/databases/mydb.db";
String backupDBPath = "/data/my_package/bkp/mydb.db";
String dir = "/data/my_package/bkp/";
File directory = new File(dir);
if (! directory.exists()){
directory.mkdir(); }
File currentDB = new File(data, currentDBPath);
File backupDB = new File(data, backupDBPath);
FileChannel src = new FileInputStream(currentDB).getChannel();
FileChannel dst = new FileOutputStream(backupDB).getChannel();
dst.transferFrom(src, 0, src.size());
src.close();
dst.close();
Toast.makeText(getApplicationContext(), "Backup is successful ", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
e.printStackTrace();
}
}
It seems the bkp directory is not even created.
However, I'm not sure if this is a good approach. I only want to back up one database file, because if the user gets a new device or uninstalls and reinstalls the app, the db will be empty.
Maybe a good solution would be to save the db file to the general Download folder, so the user could copy it anywhere and restore it.
I want to have some kind of backup (no cloud solutions) and no external drive, as internal storage is large on modern Android devices and only a few people still use SD cards.
|
Android create local folder and backup database file there
|
0
You can use AWS CloudTrail to look for the API events that created the Snapshots. A user identity (eg IAM Role or IAM User) will be associated with those events.
That should help you figure out how/what is creating those snapshots each night at 12:45am.
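A minimal CLI sketch for finding who issued the CreateSnapshot calls (region/profile options omitted for brevity):
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateSnapshot \
  --max-results 20 \
  --query 'Events[].[EventTime,Username]' \
  --output table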
Share
Follow
answered Nov 9, 2020 at 20:28
John RotensteinJohn Rotenstein
254k2626 gold badges408408 silver badges497497 bronze badges
Add a comment
|
|
In my AWS snapshots, I see that there are snapshots created without a policy, and I don't know how to disable them.
In my "Lifecycle Manager" I see only one policy, and it creates snapshot every day, and keep them for 2 weeks.
Those snapshots have description added "Created for policy: policy-0fd537dfc2b885c39 schedule: Daily".
And kept only for 2 weeks, then deleted automatically.
But there are also some snapshots without description, which aren't deleted and kept forever. Their creation date is about 11:45PM (East Europe time).
Snapshot list, with unknown snapshots in red border.
The only policy defined:
Are they created by some automatic volume backup, or by something else?
I didn't have any cron jobs on the server.
What can I do to disable them? Where to find their configuration?
I appreciate any help :)
Kind regards,
Wojtek
|
AWS snapshots created without policy
|
0
Your code is missing two curly braces (}}) at the end. Is it just a typo, or is that the reason you are unable to extract the report?
I tried to reproduce the issue to see whether I get any exceptions during execution, but it all went well, as shown in the screenshot below.
Can you elaborate on what you meant by "unable to extract the report"? Did you receive any error while executing the code? Please provide more details in that context.
Share
Follow
answered Oct 9, 2020 at 11:21
KrishnaGKrishnaG
3,39822 gold badges77 silver badges1717 bronze badges
Add a comment
|
|
I am trying to generate a backup report for the previous day via PowerShell, but it's not working. Can anyone help me with that?
Below is my PowerShell script:
$ErrorActionPreference = "SilentlyContinue"
$report_object =$null
$report_object = @()
$vms = get-azvm | select Name
$acs = Get-AzRecoveryServicesVault
foreach ($ac in $acs){
Set-AzRecoveryServicesVaultContext -Vault $ac
$container_list = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM
foreach($container_list_iterator in $container_list){
$backup_item = Get-AzRecoveryServicesBackupItem -Container $container_list_iterator -WorkloadType AzureVM
|
Trying to Extract Previous Day Azure VM Report but unable to Do via Powershell
|
It's your "autobackup". Just look at your detail report. It tells you what is included in that backup piece. You'll see it is the control file and the spfile, which is what gets backed up by autobackup.
BTW, I can see from that output that you have enabled the FRA (fast recovery area). That being the case, why are you trying to direct your backups to some other location?
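If you do want the autobackup piece to land next to your other pieces instead of in the default location, one option is to redirect its format (sketch; the path is illustrative and %F is mandatory in an autobackup format):
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'C:\Backups\%F';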
|
For development purposes, I have started to use RMAN to take backups of an XE database I have.
When I back the database up using RMAN, it is adding an additional item onto my backups. In the images attached, you can see that my intended backups are all tagged as XE but this additional backup item with a unique tag also appears each time. Can someone explain to me what this is for please? I am backing up the database (the extra item appears in full or incremental level 0 mode), the archive logs and the control file.
RUN
{
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 10G;
BACKUP AS COMPRESSED BACKUPSET TAG = 'XE Backup' FULL DATABASE FORMAT 'C:\Backups\%d_%D_%M_%Y\Database_%d_%U'
CURRENT CONTROLFILE
FORMAT 'C:\Backups\%d_%D_%M_%Y\ControlFile_%d_%U'
SPFILE
FORMAT 'C:\Backups\%d_%D_%M_%Y\SPFile_%d_%U'
PLUS ARCHIVELOG
FORMAT 'C:\Backups\%d_%D_%M_%Y\ArchiveLog_%d_%U';
}
CROSSCHECK BACKUP;
DELETE NOPROMPT OBSOLETE;
DELETE EXPIRED BACKUP;
LIST BACKUP SUMMARY;
Backup Summary Report
Thanks.
Backup Detail Report
|
RMAN - Extra Item Created in Backup
|
0
For calculating diffs you could use something like diff_match_patch.
For each file version you could store a series of DeltaDiff records.
A DeltaDiff would be a tuple of one of two types: INSERT or DELETE.
You could then store the series of DeltaDiffs as follows:
Diff = [DeltaDiff_1, DeltaDiff_2, ... DeltaDiff_n ] = [
(INSERT, byteoffset regarding to initial file, bytes)
(DELETE, byteoffset regarding to initial file, length)
....
(....)
]
Applying the DeltaDiffs to the initial file would give you the next file version, and so on, for example:
FileVersion1 + Diff1 -> FileVersion2 + Diff2 -> FileVersion3 + ....
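A minimal Python sketch of the same idea using the diff-match-patch library's patch API, which already encodes insert/delete operations much like the DeltaDiff tuples above (file names are illustrative):
from diff_match_patch import diff_match_patch

dmp = diff_match_patch()
v1 = open("file_v1.txt", encoding="utf-8").read()   # original version
v2 = open("file_v2.txt", encoding="utf-8").read()   # modified version

# store only the delta, not the whole new version
patches = dmp.patch_make(v1, v2)
delta_text = dmp.patch_toText(patches)

# later: rebuild the new version from the original plus the stored delta
rebuilt, _results = dmp.patch_apply(dmp.patch_fromText(delta_text), v1)
assert rebuilt == v2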
Share
Follow
edited Oct 5, 2020 at 12:43
answered Oct 5, 2020 at 12:36
StPiereStPiere
4,1731717 silver badges2424 bronze badges
2
Thank you, but this only compares text files; I need to compute the difference for any type of file, so I believe it would have to be a binary comparison.
– Yassine El Khanboubi
Oct 5, 2020 at 13:34
I had binary files in mind ... you could treat every binary file as a text file if you wish, by working with bytes as chars.
– StPiere
Oct 5, 2020 at 18:55
Add a comment
|
|
I'm looking for any information or algorithms that allow differential file saving and merging.
To be more clear: when modifying the content of a file, the original file should stay the same and every modification should be saved in a separate file (the same idea as a differential backup, but per file). When the file is accessed, the latest version should be reconstructed from the original file and the last differential file.
What I need to do is described in the diagram below :
|
Differential file saving algorithm or tool
|
0
Yes there is! Try the MySQL code below.
RENAME TABLE supplier_invoice_rows_backup TO supplier_invoice_rows;
Or try the Oracle equivalent:
RENAME supplier_invoice_rows_backup TO supplier_invoice_rows;
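If the original table still exists and you only need its rows back (rather than swapping table names), a plain INSERT ... SELECT is another option — a sketch, assuming both tables still have the same structure and the target rows are not already present:
INSERT INTO supplier_invoice_rows
SELECT * FROM supplier_invoice_rows_backup;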
Share
Follow
answered Oct 5, 2020 at 10:54
ScottOverdriveScottOverdrive
17566 bronze badges
2
Will it only rename the table, supplier_invoice_rows_backup to supplier_invoice_rows? But I want to copy the data from supplier_invoice_rows_backup to supplier_invoice_rows.
– user14392749
Oct 5, 2020 at 10:56
There is no automated way to copy data from a table to a table. The only way would be to rename the backup table to the new table. What you can do is do it manually and copy each table's data using the 'mysqldump' command. Ref: link. I don't know if the command works for Oracle...
– ScottOverdrive
Oct 5, 2020 at 10:57
Add a comment
|
|
I made some changes to my tables and needed a backup of them, which I created with something like
CREATE TABLE supplier_invoice_rows_backup
AS
SELECT * FROM supplier_invoice_rows
I made changes and I need to return data from supplier_invoice_rows_backup to supplier_invoice_rows
Is there any way to do this ?
|
Restore backup table data to old one
|
0
I have answered your query at - learn.microsoft.com/en-us/answers/questions/98585/… Let me know if you have any other question in this regard
Share
Follow
answered Sep 25, 2020 at 18:04
SadiqhAhmed-MSFTSadiqhAhmed-MSFT
17144 bronze badges
Add a comment
|
|
I got a large bill recently from Azure and see that Backup GRS Storage was a large chunk of that bill at 1.7TB. I see that the full VM backup had the retention period set to 180 days! So I changed it to 14. It doesn't appear as though this had an impact as the Recovery Services Vault overview page still shows Backup Storage for Cloud GRS is still at 1.7TB.
How can I view the files in this vault?
It doesn't appear as though this is linked to a separate Blob storage
account...is this vault a separate type of storage account that
doesn't use Blobs as the backing storage?
Does reducing the retention period then delete any recovery points older than that, hence freeing up that space? (Maybe I just need to wait a bit? It's been about 15 minutes with no reduction in storage.)
I did see this post, but hoping after 5 years the answer is not that I need to create an entirely new vault!
|
View and Reduce Storage for Azure Recovery Services
|
Do the following to back up and restore your Mautic:
Zip your current directory, then download it
Export your Mautic database
On server B:
Create a database
Import the exported database
Upload the Mautic files to the domain folder you want to use
Change the DB connection in app/config/local.php
That should fix the issue. A command-line sketch of these steps follows below.
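A minimal command-line sketch of those steps, with illustrative paths, database, and user names:
# on server A
tar -czf mautic_files.tar.gz -C /var/www/mautic .
mysqldump -u mautic_user -p mautic_db > mautic_db.sql

# on server B
mysql -u mautic_user -p -e "CREATE DATABASE mautic_db"
mysql -u mautic_user -p mautic_db < mautic_db.sql
mkdir -p /var/www/new_domain && tar -xzf mautic_files.tar.gz -C /var/www/new_domain
# then update the DB credentials in app/config/local.php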
|
I am a beginner in IT and Web Development.
I would like to create a backup of the Mautic installed on a hosting server “A” and restore it in another server “B”. How do I do that?
If it’s possible to automate the backup and the restoration, please tell me how to proceed.
|
Backup and restoration of Mautic
|