Response | Instruction | Prompt
---|---|---|
I got around the issue by creating an image from the failed machine's root disk VHD URI, launching a new machine from that image, and then taking a backup with no data loss.
|
I was trying to take a backup of an existing VM, but it failed while configuring the backup itself. The VM was actually provisioned from a backup of another machine that had already been backed up successfully, so why can't I back up this one? The error was:
Error Code: UserErrorGuestAgentStatusUnavailable
Error Message: VM agent is unable to communicate with the Azure Backup
Service.
|
Error while taking backup in Azure
|
You can just copy the files in JENKINS_HOME, or, for a better approach, you can use
thinBackup
With thinBackup you can easily back up and restore.
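For the plain-copy approach, a minimal sketch could look like the following (assuming JENKINS_HOME is the /usr/share/tomcat/.jenkins path from the question and that /backups is a writable destination; both are assumptions to adjust):
# archive the whole Jenkins home directory with a datestamp
tar -czf /backups/jenkins_$(date +%F).tar.gz -C /usr/share/tomcat .jenkins
Restoring is then just extracting the archive back into place and restarting Jenkins.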
|
Is there an easy way to backup and restore a Jenkins master (config, logs, etc)?
Is it just a case of compressing and decompressing the directory (on Centos7):
/usr/share/tomcat/.jenkins?
|
Backing up and Restoring Jenkins configuration and Logs
|
I believe I was making a typo when assigning the role to the user; the following does indeed work:
[
{
"role" : "local_lock",
"db" : "admin",
"isBuiltin" : false,
"roles" : [ ],
"inheritedRoles" : [ ],
"privileges" : [
{
"resource" : {
"cluster" : true
},
"actions" : [
"fsync",
"unlock"
]
}
],
"inheritedPrivileges" : [
{
"resource" : {
"cluster" : true
},
"actions" : [
"fsync",
"unlock"
]
}
]
}
]
|
We need to have a user with minimal privileges that is only able to lock a mongo instance, using db.fsyncLock() and db.unlock(), to ensure we can take consistent snapshots of the disk images. I currently have the following role created:
{
"role" : "local_lock",
"db" : "admin",
"isBuiltin" : false,
"roles" : [ ],
"inheritedRoles" : [ ],
"privileges" : [
{
"resource" : {
"cluster" : true
},
"actions" : [
"logRotate",
"resync",
"unlock"
]
}
],
"inheritedPrivileges" : [
{
"resource" : {
"cluster" : true
},
"actions" : [
"logRotate",
"resync",
"unlock"
]
}
]
}
But when I use this user to attempt a lock I receive the following:
> db.fsyncLock()
{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { fsync: 1.0, lock: true }",
"code" : 13,
"codeName" : "Unauthorized"
}
>
What other permissions are required? Mongo versions as follows:
MongoDB shell version v3.4.7
MongoDB server version: 3.4.7
|
Minimum Privileges required for Mongo lock and unlock
|
You can use mysqldump to back up a remote MySQL database.
Suppose your MySQL database is on a host called "dbhost". You can reach that host over the network from your new Linux host.
Run this command on your new Linux host:
$ mysqldump --single-transaction --all-databases --host dbhost > datadump.sql
(You might also need to add the --user and --password options.)
You can automate any command you can run at the command line. Put it in a shell script, then invoke the script, for example, from cron.
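As a rough sketch of that automation (the user, password, and paths below are placeholders, not values from the question; only dbhost comes from the example above):
#!/bin/sh
# dump all databases from the remote host and compress the result
mysqldump --single-transaction --all-databases \
  --host dbhost --user backupuser --password='secret' \
  | gzip > /backups/datadump_$(date +%F).sql.gz
A crontab entry such as 0 3 * * * /usr/local/bin/mysql-backup.sh would then run it nightly at 03:00.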
Thanks, sadly I just noticed the hoster does not allow remote access to the database (which is a positive thing from a security standpoint!). Therefore, this does not fulfill my needs, but it is a correct answer nonetheless. I will accept it if I don't get a more satisfying one.
– xavor
Jul 25, 2017 at 20:22
Yep, when you use commodity hosting plans, you sacrifice a lot of potential features. They have to make hosting one-size-fits-all to make it as cheap as they do.
– Bill Karwin
Jul 25, 2017 at 20:55
Yes, for my other websites I use a much more leveled solution (and I'm also using their VPS as a backup server). But in this case some stuff is already provided.
– xavor
Jul 27, 2017 at 6:31
|
I took over a website at a less-than-optimal hoster with no backups yet.
I do have FTP access and I know the database access parameters the installed web app uses for the MySQL server, but I don't have access to the MySQL interface or the underlying server.
I would like to do an automated backup to a Linux server under my control.
I can download all data via FTP, zip it and store it on a backed up storage.
How to do this for the database?
As an initial solution I installed phpMyAdmin and did a manual backup, but I would like to automate this process.
|
Backup SQL database from secondary linux server
|
'Backups' is just a GUI for 'Deja-Dup', which is a frontend for 'duplicity', the actual backend making the backups.
Long story short, the answer is no: the finest time granularity you can achieve is in days, not hours.
(See https://bugs.launchpad.net/deja-dup/+bug/479191)
If you can afford to get rid of the GUI and get back to cron to schedule duplicity backups, you will be able to choose the time of the backup.
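For example, a crontab entry along these lines would run the backup at 06:00 every day (the source and target paths are assumptions; Deja-Dup normally encrypts, so either export PASSPHRASE or pass --no-encryption):
# m h dom mon dow command
0 6 * * * duplicity --no-encryption /home/youruser file:///media/backupdrive/backups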
Thank you! This clarifies a lot of problems I was having finding information about this app.
– boof
Jul 20, 2017 at 18:42
|
I would like to use the built-in application Backups in Ubuntu 16.04 to backup my system. However, it seems I can only choose to schedule a backup every day at some unlisted time. The application keeps popping up at around 4:30 PM, but that's not a good time for me to make a backup. How can I change the time of day it creates a backup to be something more reasonable, like 6:00 AM every day?
Thanks
|
How to change the time of day Backups creates a backup in Ubuntu 16.04?
|
I tried this, and it works:
public function actionDownload($filename = null) {
$file = $filename;
$this->updateMenuItems();
if (isset($file)) {
$sqlFile = $this->path . basename($file);
if (file_exists($sqlFile)) {
Yii::$app->response->sendFile($sqlFile);
}
//throw new HttpException(404, Yii::t('app', 'File not found'));
}
}
|
In a Yii2 project I want to download a backup file. I have set up the download button in my action column. My code doesn't work: when I click the download button, it loads a blank page. I need someone to help me download the backup file.
my_controller:
public function actionDownload($file = null) {
$this->updateMenuItems();
if (isset($file)) {
$sqlFile = $this->path . basename($file);
if (file_exists($sqlFile)) {
$request = Yii::$app->getRequest();
$request->sendFile(basename($sqlFile), file_get_contents($sqlFile));
}
}
throw new HttpException(404, Yii::t('app', 'File not found'));
}
my_view (I'm using Kartik GridView):
<?php
echo kartik\grid\GridView::widget([
'id' => 'install-grid',
'export' => false,
'dataProvider' => $dataProvider,
'columns' => array(
'name',
'size:ShortSize',
'create_time',
//'modified_time:relativeTime',
[
'class' => 'kartik\grid\ActionColumn',
'template' => '{download_action}',
'header' => 'Download',
'buttons' => [
'download_action' => function ($url, $model) {
return Html::a('<span class="glyphicon glyphicon-download-alt"></span>', $url, [
'title' => Yii::t('app', 'Download Backup'), 'class' => 'download',
]);
}
],
'urlCreator' => function ($action, $model, $key, $index) {
if ($action === 'download_action') {
$url = Url::to(['backuprestore/download', 'filename' => $model['name']]);
return $url;
}
}
],
),
]);
?>
|
Yii2: How to download backup files using spanjeta/yii2-backup?
|
The easiest way would be to go to the disk's properties and create a snapshot, provided you are using managed disks.
Otherwise, use this article.
|
I've got a machine on the Azure platform with Debian on it. It has some things installed, and at this point I want to make a copy of it and do a few things that may break this installation. That's why I need a simple and fast option to roll back this machine to its clean state.
Normally I would use snapshots, which would allow me to roll back the machine state with just one click, but it's not as simple as I thought. I've found a guide where I had to do some complicated things with PowerShell, but I want to do it simply with the new Azure portal.
I'm open to any other methods that would be simple and fast. I've checked backups, but it looks like the only option there is scheduled, regular backups.
Does anyone know a method that would give me this functionality?
|
Azure virtual machine rollback
|
Create a trivial script, last_weekday_in_month.sh, and use it in your crontab entry.
You are using syntax that goes well beyond the basic shell, so it is better to move it into a small script with a specific shell enforced via the #! line.
0 12 * * 0 /path/last_weekday_in_month.sh && sudo tar -cpzf /media/BackupDisk/wwwJUNEbackup.tar.gz /var/www
last_weekday_in_month.sh:
#!/bin/bash
if [ $(date +%d -d '+7 days') -lt '8' ] ; then
exit 0
else
exit 1
fi
|
We want to extend our backup system to include a month-end backup. It will be performed on the last Sunday of the month, but the code below is written so that I can see it working today on a smaller scale.
Started with (which works)
0 12 * * 0 sudo tar -cpzf /media/BackupDisk/wwwJUNEbackup.tar.gz /var/www
I have trawled the internet and come up with this code, tested in a script file:
if [ $(date +%d -d '+7 days') -le '8' ] ; then
echo "Yes"
else
echo "No"
fi
(For reference this says: if today's date + 7 days is less than or equal to 8, then YES, else NO.)
But when I try to include it in root's crontab (via sudo):
26 17 * 6 5 [ $(date +%d -d '+7 days') -lt '8' ] && sudo tar -cpzf /media/BackupDisk/wwwJUNEbackup.tar.gz /var/www
I get nothing.
What am I doing wrong?
|
Schedule crontab job for last sunday in the month [closed]
|
Found out what the problem was:
pg_dump -h 192.168.130.240 -p 5433 -U postgres -F c postgres > C:\Users\Marko Petričević\Documents\Radni_sati_Backup\proba
needs to be like this:
pg_dump -h 192.168.130.240 -p 5433 -U postgres -F c postgres > "C:\Users\Marko Petričević\Documents\Radni_sati_Backup\proba"
The problem was the space in the path.
|
I'm trying to make a backup in the folder C:\Users\Marko Petričević\Documents\Radni_sati_Backup\proba, where "proba" is the name of the backup file.
My command looks like this:
pg_dump -h 192.168.130.240 -p 5433 -U postgres -F c postgres > C:\Users\Marko Petričević\Documents\Radni_sati_Backup\proba
and then I get an error: "pg_dump: too many command-line arguments (first is "Petričević\Documents\Radni_sati_Backup\proba")"
But, when I write a command like:
pg_dump -h 192.168.130.240 -p 5433 -U postgres -F c postgres >C:\radni_sati_backup\radni_sati_proba
Everything works, and I get the "radni_sati_proba" file in the directory I listed in the command.
Why is this happening?
|
pg_dump: too many command-line arguments when calling from cmd
|
I couldn't find an answer where GitLab will take care of this for me so I just created another cron task:
0 3 * * * find /path/to/mounted/drive/ -mindepth 1 -maxdepth 1 -name "*_gitlab_backup.tar" -mtime +13 -delete
|
So I have GitLab installed on our server, and I also followed their guide on how to set up the backups.
Goal
[Source] Create a cron task to backup the data every Tuesday - Saturday at 2:00 AM
[Source] Upload the created backup file to a Windows mounted drive
[Source] Remove backup files older than 2 weeks (14 days) on both the local server and the Windows mounted drive
So far only 2½ of my goals are achieved.
For #3, setting gitlab_rails['backup_keep_time'] = 1209600 only cleans up the files on the local server but not the uploaded files on the mounted Windows drive.
What do I need to do so that GitLab cleans both backup locations?
Additional Info
I have used the GitLab CE Omnibus installation.
Currently our version is GitLab CE 9.1.2 df1403f
|
GitLab backup auto-cleanup
|
A simple answer, although maybe not what you're looking for: you can make a package.json script which uses shell commands (including the PostgreSQL built-in pg_dump):
"backup-db": "pg_dump YourDatabaseName | gzip > backups/database_backup.`date +%m-%d-%Y-%H-%M`.gz",
|
How can I create a full PostgreSQL database backup, as well as a dump of a separate table, from Node.js (Express/LoopBack)?
I didn't find any solution to solve it...
Any information is welcome...
I'm interested in an SQL dump, because there are 2 "big" tables (~40,000 rows / ~30 columns) and several dictionary tables.
|
PostgreSQL table backup from node.js(express)
|
The process you have described is for creating an image. This image can then be used to create multiple VMs. This is different from taking an already built VM and moving it to Azure as-is.
1) Yes. You are provisioning a new VM from the "captured" image. You really don't want multiple servers having the same private key.
2) DNS in Azure is configured in the virtual network you are deploying the image to. If you have this configured correctly for your environment, this step shouldn't cause any issues.
3) On VM creation you will be prompted either to provide your public key for root authentication or to specify a username/password you would like to use. You will have SSH access to the machine by default.
4) It is harmless.
5) Correct. I believe in a Linux VM this is replaced by the server name you create, but I could be wrong.
6) It is giving you a clean machine (remember this is "from image"), so it is cleaning up any accounts it might have created.
Update for comments: here is what Azure recommends for specialized VHDs (Windows):
Specialized VHD - a specialized VHD maintains the user accounts, applications and other state data from your original VM. If you intend to use the VHD as-is to create a new VM, ensure the following steps are completed.
Prepare a Windows VHD to upload to Azure. Do not generalize the VM using Sysprep.
Remove any guest virtualization tools and agents that are installed on the VM (i.e. VMware tools).
Ensure the VM is configured to pull its IP address and DNS settings via DHCP. This ensures that the server obtains an IP address within the VNet when it starts up.
|
I am working towards my first capture of a Linux Azure VM using the capture tool.
The first step is to run sudo waagent -deprovision. Running this command does the following:
Removes SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
Does this mean that my private/public keys will be gone and my existing server will no longer be able to SSH into its peers without copying these keys back into place?
Clears nameserver configuration in /etc/resolv.conf
I believe my custom defined DNS names will have to be put back in place as well.
Removes the root user's password from /etc/shadow (if Provisioning.DeleteRootPassword is 'y' in the configuration file)
Not familiar with /etc/shadow. Will I no longer have SSH access to my server?
Removes cached DHCP client leases
I'm assuming this is harmless.
Resets host name to localhost.localdomain
I believe this is only an issue if a custom hostname was setup.
Deletes the last provisioned user account (obtained from /var/lib/waagent) and associated data
Is this an account provisioned by the capture tool itself or by me? If the latter, why?
|
Azure: VM Backup Explained
|
As long as you don't want to change other partitions like boot or recovery, yes, you can directly flash the data file in .img format. It is also advisable to erase the old data first so that no previous leftovers remain. So,
fastboot erase data
fastboot flash data file_name.img
would be the proper way of flashing a data image.
There is a good tutorial on this: Adb and Fastboot Quick Guide
|
I created an image of, for example, the /data partition with
adb pull /dev/block/mmcblk0pXXX data.img
Can I use
fastboot flash data data.img
fastboot reboot
to restore it back? Or does fastboot require a specific image file format, not just a raw binary? If so, is it possible to convert my data.img to that specific format?
|
Can I flash with fastboot image created with adb pull? [closed]
|
Copy /usr/local/android-studio and all its contents to your backup folder.
Also back up /home/user/.AndroidStudio*.* and /home/user/Android.
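A minimal sketch of that, assuming the default paths above (adjust the username and the destination):
# archive the IDE, its settings and the SDK in one go
tar -czf /media/backup/android-studio-backup.tar.gz \
    /usr/local/android-studio \
    /home/user/.AndroidStudio* \
    /home/user/Android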
|
My internet is not that fast and I need to format my PC. It's running Ubuntu 16.04 and I want to back up the Android Studio installation so that when I format my PC and install Ubuntu again, I can use it again. Thanks in advance.
|
How do I backup Android Studio itself
|
There is no direct way to export the Blog module in BC. Also, the posts are stored server-side in a database, so you will not find them over FTP, as they are not files that exist on the site. Both of these points mean it won't be simple to get them out.
The only thing you can do is put all the posts into an RSS feed and then download the XML as a backup. This method is explained in more detail on the Adobe forums - https://forums.adobe.com/thread/1002301
|
I want to make a backup of my blogs in Business Catalyst, but I can only import; there is no export feature.
So I wanted to go onto the Business Catalyst server directly to get the files, but I can't find any of them, even when doing a 'Find all in folder' search.
Where are the blog files? I can actually see my blogs on the Business Catalyst site, so they must be somewhere, right?
Please help!
|
In Business Catalyst, where are the blogs files located?
|
It seems you need to modify the File.Copy line:
var targetPath = Path.Combine(TargetDir, file.Name);
File.Copy(file.FullName, targetPath, true);
I changed the first argument from file.Name to file.FullName; this should fix the exception.
|
namespace Backup
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void btn_Backup_Click(object sender, EventArgs e)
{
List<DirectoryInfo> SourceDir = this.lbox_Sources.Items.Cast<DirectoryInfo>().ToList();
string TargetDir = this.tbox_Target.Text;
foreach (DirectoryInfo directory in SourceDir)
{
foreach (var file in directory.GetFiles())
File.Copy(file.Name, Path.Combine(TargetDir, file.Name), true);
}
}
When I try to back up, it throws an exception even though the file exists and is accessible. I'm not that good at programming, so there is probably some stupid mistake :P
|
FileNotFoundException but file exists C#
|
From the question it seems you are simply trying to copy the CSV files from a single source directory (not recursively). You should use copy, not move/rename, if you wish to keep the original in place, with a copy in dest1.
import os
import shutil
source = ('C:\\Qualys Report\\Qualys Data\\')
dest1 = ('C:\\Qualys Report\\Backup\\')
for filename in os.listdir(source):
    if filename.endswith('.csv'):
        shutil.copy(source + filename, dest1)
|
source = ('C:\\Qualys Report\\Qualys Data\\')
dest1 = ('C:\\Qualys Report\\Backup\\')
for filename in os.listdir(source):
if filename.endswith('.csv'):
shutil.move(source+filename, dest1)
For some reason it's moving the folder and the CSV file I have into the backup folder.
Is there any way I can just move the file itself?
|
How to copy just the files in a folder in python3?
|
Yes, it is possible. There is no direct way, but it needs some additional tweaking on the Data Pipeline end. You need to understand how Data Pipeline actually runs your export job by default.
When you click the export button in the DynamoDB console, it takes you to the Data Pipeline console to create a pipeline for the export.
After filling out the template, instead of running it, you can use the Edit in Architect feature to alter the current template, which only works with one table.
On the Architect page, if you look at the Activities section, you will find an EmrActivity running an EMR step using the following parameters. This EMR step runs the export job using the parameters that you initially passed in the template. Note that it also runs on the EmrClusterForBackup resource, which you can find in the Resources section.
s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}
To run the export on other DynamoDB tables using the same EMR resource, you simply need to create another EmrActivity object by clicking Add and then adding an EmrActivity in Architect. On this activity you can use the same RunsOn as the previous activity, and in the step parameters you can manually edit to include the other table name and its export path,
like
s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,s3://myexport-bucket/table2/,table2,0.9
You can extend this for multiple tables.
Note: This can easily be done for multiple tables by using a JSON file as the Data Pipeline definition, editing it to add more activities and parameters, and then using that definition to run the pipeline later.
|
When I set up a recurring backup via the export function in the DynamoDB console, the task it creates automatically spins up a new EMR cluster when it runs. Some of my tables need to be backed up but are fairly small. What I end up with is a huge number of large servers running to back up some relatively small tables. Is there any easy way to chain these tasks to run on one server group, in series or in parallel?
|
Is there a way to group my DynamoDB export tasks on one EMR cluster?
|
If you have no idea whether it is right or not, check the documentation of both products. Some antiviruses may block other software from directly accessing your hard drive, as this technique was popular with viruses trying to hide from the OS and antivirus software. But popular backup solutions are well known and probably marked as trusted in mainstream antiviruses.
For example, Acronis has instructions on how to mark its backup solutions as trusted (or excluded) in several antiviruses: https://kb.acronis.com/content/46430
|
This is a general question. I have heard from some people that it is not OK to make a backup of Windows with an antivirus already installed on it. I have no idea whether that is right or not. I want to make a backup of my Windows 10 with third-party software like Acronis. I would really appreciate it if someone could clarify whether it is OK to make a backup with an antivirus (for example 360 Total Security) already installed on the laptop.
|
Backup Windows 10 with an Antivirus
|
Your date expression seems to misbehave inside the arithmetic context. Adding temporary variables solved the issue for me:
#!/bin/bash
echo > /home/alpha/folder/keep.txt
#writing dates of the backups that should be kept to the array
for i in {0..7}; do ((keep[$(date +%Y%m%d -d "-$i day")]++)); done
for i in {0..4}; do ((keep[$(date +%Y%m%d -d "sunday-$((i+1)) week")]++)); done
for i in {0..12}; do
DW=$(($(date +%-W)-$(date -d $(date -d "$(date +%Y-%m-15) -$i month" +%Y-%m-01) +%-W)))
begin=$(date -d "$(date +%Y-%m-15) -$i month" +%Y)
for (( AY=begin; AY < $(date +%Y); AY++ )); do
((DW+=$(date -d $AY-12-31 +%W)))
done
((keep[$(date +%Y%m%d -d "sunday-$DW weeks")]++))
done
for i in {0..30}; do
DW=$(date +%-W)
begin=$(($(date +%Y)-i))
for (( AY=begin; AY < $(date +%Y); AY++ )); do
((DW+=$(date -d $AY-12-31 +%W)))
done
((keep[$(date +%Y%m%d -d "sunday-$DW weeks")]++))
done
#writing the array to file keep.txt line by line
for i in ${!keep[@]}; do echo $i >> /home/alpha/folder/keep.txt; done
#delete all files that not mentioned in keep.txt
cd /home/alpha/folder
ls -1 /home/alpha/folder/ | sort /home/alpha/folder/keep.txt /home/alpha/folder/keep.txt - | uniq -u | xargs rm -rf
rm /home/alpha/folder/keep.txt
However, I am unsure why the expression misbehaves inside the arithmetic block.
|
I am having a problem finding the error in a bash script that handles daily, monthly, and yearly backups. Here is the script:
#!/bin/bash
echo > /home/alpha/folder/keep.txt
#writing dates of the backups that should be kept to the array
for i in {0..7}; do ((keep[$(date +%Y%m%d -d "-$i day")]++)); done
for i in {0..4}; do ((keep[$(date +%Y%m%d -d "sunday-$((i+1)) week")]++)); done
for i in {0..12}; do
DW=$(($(date +%-W)-$(date -d $(date -d "$(date +%Y-%m-15) -$i month" +%Y-%m-01) +%-W)))
for (( AY=$(date -d "$(date +%Y-%m-15) -$i month" +%Y); AY < $(date +%Y); AY++ )); do
((DW+=$(date -d $AY-12-31 +%W)))
done
((keep[$(date +%Y%m%d -d "sunday-$DW weeks")]++))
done
for i in {0..30}; do
DW=$(date +%-W)
for (( AY=$(($(date +%Y)-i)); AY < $(date +%Y); AY++ )); do
((DW+=$(date -d $AY-12-31 +%W)))
done
((keep[$(date +%Y%m%d -d "sunday-$DW weeks")]++))
done
#writing the array to file keep.txt line by line
for i in ${!keep[@]}; do echo $i >> /home/alpha/folder/keep.txt; done
#delete all files that not mentioned in keep.txt
cd /home/alpha/folder
ls -1 /home/alpha/folder/ | sort /home/alpha/folder/keep.txt /home/alpha/folder/keep.txt - | uniq -u | xargs rm -rf
rm /home/alpha/folder/keep.txt
When I try to run the script, it throws this error message:
./back.sh: line 12: syntax error near unexpected token `newline' ./back.sh: line 12: ` done'
Where did I go wrong in the script?
|
Error on backup bash script: syntax error near unexpected token `newline'
|
rm_if_not_on_1st() {
[ "$(stat -c %y "$1" | cut -c9-10)" = "01" ] || rm "$1"
}
export -f rm_if_not_on_1st
find /my/backup/path/* -mtime +30 -exec bash -c 'rm_if_not_on_1st "$1"' _ {} \;
This should work with any valid file name. {} is passed as a true parameter to the shell, not dynamically inserted into the command.
– chepner
Mar 6, 2017 at 12:48
|
I have this in a cron job to remove databases older than 30 days:
find /my/backup/path/* -mtime +30 -exec rm {} \;
How can I modify this to only delete the files if the backup was not taken on the first of the month?
E.g. I want to have a daily backup of databases (kept for one month only) PLUS a backup for each month:
Jan/1 backup
Feb/1 backup
Mar/6 backups (as it's currently 6th March)
Any ideas how I can do this?
|
Delete older than 30 days if not 1st of month
|
If you are comfortable with Snapshotter, then you can use Snapshotter with Docker as well: just mount the Cassandra Docker volume somewhere on the host and take the backup as usual with Snapshotter.
You can mount the container's Cassandra directory /var/lib/cassandra to /opt/cassandra on the host system using a command such as:
docker run -d --name web -v /opt/cassandra:/var/lib/cassandra cassandra_container
After that you can take incremental backups of /opt/cassandra using Snapshotter.
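A hypothetical sketch of that setup (the container and image names are assumptions); flushing a snapshot with nodetool before the host-side backup helps ensure the SSTables under /opt/cassandra are consistent:
# run Cassandra with its data directory bind-mounted on the host
docker run -d --name cassandra_node -v /opt/cassandra:/var/lib/cassandra cassandra
# flush memtables and create an on-disk snapshot inside the container
docker exec cassandra_node nodetool snapshot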
OR
If you are planning to explore other options, have a look at the available Docker volume plugins.
|
We have a Cassandra cluster with 3 nodes running in Docker containers in our environment. Earlier we used Snapshotter, but as we have recently migrated to Docker, how can we back up Cassandra? Is there any way to take incremental backups?
Thanks in Advance.
Kiran Kumar
|
How to backup the cassandra running in docker container
|
As BM_ARCHIVE_PREFIX doesn't help, we can ask backup-manager to generate the backup file but not to upload it. We can then write a script to upload the file and specify whatever destination path we want in the script. That's the only feasible solution.
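A sketch of such an upload script using the AWS CLI (the local archive location and filename pattern are assumptions; the bucket and prefix come from the question):
#!/bin/sh
# upload the newest backup-manager archive into the desired S3 prefix
latest=$(ls -1t /var/archives/*.tar.gz | head -n 1)
aws s3 cp "$latest" s3://bucket_name/x/y/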
|
I am using backup-manager to back up a directory's content say /home.
I want to upload the archive file to s3 bucket.
I have a directory structure in s3 bucket say /bucket_name/x/y/
If I write export BM_UPLOAD_S3_DESTINATION="bucket_name", the archive will be uploaded to bucket_name/
If I write export BM_UPLOAD_S3_DESTINATION="bucket_name/x/y/", then an error occurs saying the bucket name should not look like an IP address.
But I want the archive to be uploaded to bucket_name/x/y/.
How can I achieve this?
|
Backup-manager archive name when uploading to s3
|
~ won't be expanded to your home directory when it's in quotes. Leave it (and the following /) unquoted, like this:
BACKUP_DIR=~/"Documents/backups/"
Also, it's safest to use lowercase or mixed case for variable names so you don't accidentally use a variable name that has special meaning to the shell or other programs (using $PATH is the classic example).
|
In the following bash backup script:
PROJECT="testPrj"
BACKUP_DIR="~/Documents/backups/"
BACKUP_FILES="./*.sh ./*.h ./*.hpp ./*.c ./*.cc ./*.cpp ./*.md ./*.txt ./BUILD"
BACKUP_TIME=_`date +%Y%m%d_%H%M`
BACKUP_FILENAME=$BACKUP_DIR$PROJECT$BACKUP_TIME.tar.bz2
mkdir -p $BACKUP_DIR
echo "Created backup directory:" $BACKUP_DIR
echo $BACKUP_FILENAME
tar -cpjf $BACKUP_FILENAME $BACKUP_FILES
This is the output:
Created backup directory: ~/Documents/backups/
~/Documents/backups/testPrj_20170206_1609.tar.bz2
I get the compressed file in the wrong path. Instead of being:
~/Documents/backups/
it goes in: \~/Documents/backups/
This destination directory effectively exists, and it is in the local path.
Running mkdir on its own from the command line creates the directory in the right place.
|
Writing files in the wrong path with bash
|
esptool.py has an (undocumented) read_flash option which you can use to read the firmware from 0x0000 back into a local file.
$ esptool.py read_flash
usage: esptool read_flash [-h] [--no-progress] address size filename
esptool read_flash: error: too few arguments
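A typical invocation looks like the following (the serial port and the 4 MB flash size are assumptions; adjust them to your board):
# dump the first 4 MB of flash to a local backup file
esptool.py --port /dev/ttyUSB0 read_flash 0x00000 0x400000 nodemcu-backup.bin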
|
How do I backup my NodeMCU firmware before upgrading?
Note: I am a completely newbie at this. I have never worked with a NodeMCU before. I have other programming skills, so programming is not new to me.
|
How to backup NodeMCU firmware?
|
Please see my answer above for how I listed all .zip files in a given directory, ordered them by filename, and wrapped it in a function so I can easily reuse it elsewhere.
|
I thought I would share something useful: I wanted to list only the .zip files in a directory (to minimise what I am displaying in PHP, for security), so I have used the script below, with the newest files on top.
This is just some code I wanted to share in case anyone else needs to do something similar.
<?php
function list_zipfiles($mydirectory, 1) {
// directory we want to scan
$dircontents = scandir($mydirectory);
// list the contents
echo '<ul>';
foreach ($dircontents as $file) {
$extension = pathinfo($file, PATHINFO_EXTENSION);
if ($extension == 'zip') {
echo "<li>$file </li>";
}
}
echo '</ul>';
}
?>
<h6 class="header1">PRODUCTION</h6>
<hr class="style1">
<?php
call_user_func('list_zipfiles', "backups/db1");
?>
The 1 below sets the listing order of the files; changing it to 0 orders them in the other direction:
function list_zipfiles($mydirectory, 1) {
The output is as below:
|
List files from directory with .zip extension and sort by newest created file first
|
It seems the problem affects only x86_64 emulators; on an x86 API 24 emulator the local transport is available.
|
I want to test backing up my application, but no backup transport is available on the emulator. I tried two emulators - Android API versions 24 and 25, both with Google API support. When I execute bmgr list transports, bmgr answers: No transports available. When I execute the same command on my device and on emulators with a lower API version, bmgr says that the local transport is available.
|
No backup transports available on android emulator with api level 24+
|
You have probably used the same database for both. Create another database and user for dev.cookies.com and use that database and user to create the new dev site. Check in the configuration.php file that these values are different:
public $db = 'joomla';
public $password = 'db_password';
public $user = 'root';
These are all database values. Also check these settings
public $log_path = 'C:/xampp/htdocs/joomla/administrator/logs';
public $tmp_path = 'C:/xampp/htdocs/joomla/tmp';
|
I'm not a pro with server management, I mainly do web design on Joomla.
Our IT Manager recently left and I was given access to everything since no-one else in our company knows web.
We have a main site. In this example I'll name the site as cookies.com for example sake.
www.cookies.com is the main site. The domain is cookies.com only. We have 3 server accounts for this.
123reg - to buy the domain and renew
heartinternet - to host the website
1and1 - this is where it is redirected (or something), well all the files are found in virtual servers here.
When I go to the virtual servers, there are a couple of directories, and eventually I can find a folder called "var" which leads me to another directory. I go to "www" and I find so many site folders there, including cookies.com and dev.cookies.com and more
My objective was to backup and restore cookies.com to dev.cookies.com so that I can work on it and replace the original cookies.com later on. I use Akeeba Backup for this.
The issue is, as soon as I started making changes to dev.cookies.com, the same changes appeared on cookies.com.
I'm very confused- what's going on here? Is there something written in the files that direct the database or whatever to the same place, for which reason the changes occur together?
Sorry this is very confusing. Please suggest if you can what I can do. I know this might not be much information.
Thank you!
|
Why does my Joomla site subdomain copy the main site?
|
Use --format plain instead of the custom one. The latter is designed to work exclusively with pg_restore. The plain format also lets you look at the dumped data in a text editor and verify that it's what you want.
However, my quick test shows that it's also possible to append data with pg_restore and a custom-format data-only dump:
pg_restore -d db_production --data-only t1.backup
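A plain-format sketch of the same export and import, reusing the names from the question, would be:
# on the local machine: plain SQL, data only, as INSERT statements
pg_dump -U myusername --format plain --data-only --inserts \
  --table table1 --table table2 --table addresses mydb > t1.sql
# on the remote machine: append the rows to the existing tables
psql db_production < t1.sql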
|
I’m using Postgres 9.5 on Mac Sierra. I want to export some table data from my local machine and import that into a Postgres 9.5 database on a Linux machine. Note I don’t want to blow away the data on the Linux machine, only add the my local machine table rows to the rows that already exist on the tables in the Linux environment (and ignore duplicates). So on my local machine, I ran
pg_dump -U myusername --format custom --section data --inserts --file "t1.backup" --table "table1" --table "table2" --table "addresses" "mydb"
However on my remote machine, when I try and import the file, I get the error
myuser@remote-machine:/tmp$ psql db_production < t1.backup
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
But I don’t want to use pg_restore because I don’t want to erase the existing table data, I simply want to add to it. How can I achieve this?
|
How do I export some data my local Postgres db and import it to a remote without deleting all the data from the remote?
|
I figured it out. You need to rename the folder under WindowsImageBackup, which is your PC name.
When restoring an image you'll get a list of all different images.
|
I'm making backup images of my windows 10 (version 1607) system via "Control Panel \ File History \ System Image Backup"
I save the image to an external HD, but the images get overwritten in the WindowsImageBackup folder.
Is there a way to save different images? Let's say I want an image "fresh install", an image "fresh install with tools" and an image "fresh install with dev tools".
So if an installation isn't right for me, or my system gets messed up, I can restore a previous image?
thanks in advance
|
Multiple backup images on windows 10 [closed]
|
The closest built-in option is the NSPersistentStoreCoordinator method migratePersistentStore:toURL:options:withType:error:. It takes an existing persistent store and saves it in a new location. (Note that this method has nothing to do with migrating to newer versions of the data model). However, when this method completes, the old persistent store is removed from the persistent store coordinator and can't be used unless you re-add it.
Another option is to change the journal mode. Recent OS releases have used write-ahead logging, but the older "delete" mode is still supported. In that case you could simply copy the persistent store file, using NSFileManager methods. This is described in Apple's Technical Q&A QA1809. If you do that and you use Core Data's external binary support, you need to find and copy the directory used for the binary blobs.
|
I am going to use Core Data in a macOS application in order to manipulate about 100 MB of data that changes every second; the size should not increase significantly.
The relational nature of CoreData is exactly what I need.
I have to be very careful in order to not lose any data so I would like to create some physical file that I can store as backup.
Does Core Data already have a helper function to do this, or do I have to write it myself?
|
Backup into physical file using CoreData
|
The cd $DIR seems strange; if the first entry found by /home/company_folder/company_applications/* is a directory it will change to that directory; if it is a file (or company_applications is empty) it will get an error.
Perhaps everything is running correctly except that because of the above your ls -l is not running in the directory you expect? Try removing the cd and changing it to ls -l $DIR.
It also seems very strange to me that you are zipping up content from a backup directory into an applications directory. Perhaps you meant to be doing:
zip -r "$BACKUPDIR/`basename $i`-$NOW" $i
|
Name of a script - backup_script.sh
Location of a script on server - /home/company_folder/company_site_backups
Line added to the cron file:
@monthly /home/company_folder/company_site_backups/backup_script.sh
#!/bin/bash
DIR="/home/company_folder/company_applications/*"
BACKUPDIR="/home/company_folder/company_site_backups"
NOW=`date +\%Y\%m\%d`
cd $DIR
for i in $DIR; do zip -r "${i%/}.zip" "$BACKUPDIR/$i-$NOW"; done
ls -l
echo "Done!"
But unfortunately my script does not work properly. Actually, it does not run at all! I do not see any errors in the syntax.
Does anyone know how to fix it?
|
Server shell backup script (bash)
|
You can find backup instructions here:
https://confluence.atlassian.com/cloud/cancelations-744721616.html
I believe you will lose access but the data will still exist for 2 weeks, so you can reactivate:
"Once your site has been deactivated (i.e. your site has been taken offline), you have two weeks to pay your outstanding quote or contact Atlassian to have the site restored before your data will be permanently deleted. Note that data backups for permanently deleted instances can sometimes be retrieved by raising a ticket with our Support team within the first month after your instance has been deleted." - https://confluence.atlassian.com/cloud/billing-and-user-count-744721614.html
|
What if I stop paying for Jira? Would I lose my whole backlog and the rest of my team's work, or would it just be frozen?
And is there any way to back up all the data of a Jira account?
Thank you!
|
Stop paying Jira account
|
In the meantime I found a way. I ran the Windows 10 built-in tool diskpart twice, using the clean command (carefully!). After that, I could initialize, format and partition the drive "as usual" with Windows Disk Management. Since the drive has 4 TB of capacity, I had to use GPT as the partition style to access it fully; with the MBR style, not all of the disk space could be accessed.
|
I have recently made a backup of Windows 10 using Macrium Reflect v6.1; the backup consists of an image written to an external hard drive, following these "widely used" instructions:
http://www.everydaylinuxuser.com/2015/11/how-to-backup-windows-10-safe-way-with.html
For some reason I would now like to remove the image from the hard drive and format it, so that it is empty and consists of only one partition again. In Windows 10, I tried to do so with Disk Management, but to no avail, since the partitions on the hard disk cannot be accessed/reformatted. Is there a Macrium built-in tool to remove images, or any other tool to be recommended for this task? Thanks in advance ...
|
delete macrium reflect backup image
|
You will need to look out for unsupported features which are present in Enterprise and not in Standard.
For example, partitioning is available in Enterprise but not in Standard, so you would need to restore the database, remove partitioning, and then take a backup for it to work on Standard.
The DMV below will provide the list of Enterprise-only features which will not work in Standard:
SELECT * FROM sys.dm_db_persisted_sku_features
References:
https://dba.stackexchange.com/questions/84456/downgrade-sql-server-enterprise-edition-to-sql-server-standard-edition
https://blogs.technet.microsoft.com/sqlman/2011/03/25/sql-server-standard-vs-enterprise-edition/
|
I made a backup of my database on SQL Server Datacenter Edition and I need to restore it on SQL Server Standard. Is that possible? If so, what pitfalls are worth considering?
Thank you in advance.
|
SQL Server Datacenter Backup compatibility
|
I recommend you zip the SDK and move it to another partition like D:, E: or F:. After you have successfully installed Windows, install Android Studio, extract the sdk.zip you made previously, and select the SDK folder you just extracted. You need an internet connection the first time you create a project; it will download some JARs, otherwise you will get a Gradle error.
|
For some reasons I need to reinstall my Windows 7 OS.
I have android-sdk installed at: C:\Users\user_name\AppData\Local\Android\android-sdk and the entire directory weighs 2.5 GB.
What is the best way to back up the 2.5 GB and use it after I reinstall my OS? Using the backed-up SDK would save a considerable amount of time.
Please help.
|
Backing up installed android-sdk
|
How does a differential backup work?
Whenever we take a differential backup, it copies all the changes that occurred since the last full backup, not since the last differential backup.
That is the reason why you are seeing that "actual sizes are hugely different and can be half the size of the full backup after only a couple of days".
Hi. Thanks for the response. Yes, I understand what you are saying. Each differential backup gets its own file, and I would expect each file to get a bit bigger than the last. But it doesn't explain why the amount of data changing is a few MB while the differential backup file is several GB. Doug.
– Doug
Aug 4, 2016 at 9:26
A backup contains the data files and the LDF needed to bring the data back to consistency. So in your case there might be transactions happening on a day-to-day basis that roll back, but that don't show up in the full backup.
– TheGameiswar
Aug 4, 2016 at 9:53
|
We have several SQL Server 2008 R2 databases for which we perform a full backup every Sunday then differential backups Monday to Saturday. We also do transaction log backups every 10 minutes.
The first differential backup on Monday is usually quite small, but Tuesday to Saturday are much larger but similar in size to each other.
I used some scripts I found which predict the differential backup size, e.g. https://dougzuck.com/sql-differential-backup-size-prediction and http://www.sqlskills.com/blogs/paul/new-script-how-much-of-the-database-has-changed-since-the-last-full-backup/, and they predict a very much smaller backup size.
Examples are:
database1, full backup size 5Gb, diff size 3.5Gb, predicted diff size 84Mb
database2, full backup size 40Gb, diff size 1Gb, predicted diff size 17Mb
As you can see, the actual sizes are hugely different and can be half the size of the full backup after only a couple of days.
I know users aren't creating or modifying the actual data to any great extent. As far as I can tell, there are no index rebuilds or other management tasks happening between the full and differential backups.
It's like something is happening on Monday which causes the Tuesday onward differentials to be huge. Backup compression is not used.
Any ideas?? Thanks in advance.
Doug
|
SQL Server 2008 R2 differential backups much larger than expected
|
I wrote an application to do this, it is available on github:
https://github.com/freedev/solr-import-export-json
The idea behind the code is simple: when you query a collection using SolrJ, even an entire collection, it returns a stream of documents (i.e. SolrDocument).
And SolrDocument implements both Map<String,Object> and Serializable.
So I thought I could serialise the documents as JSON. Well, take a look at the repo; it works pretty well.
|
I am trying to restore a backup (i.e. migrate an index) from Solr 6.0 to Solr 6.1. However, when I follow the steps on https://cwiki.apache.org/confluence/display/solr/Making+and+Restoring+Backups , I get an exception saying it failed when I use
curl -XGET http://localhost:8983/solr/mycollection/replication?command=restorestatus
command to check the status.
The actual response:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int></lst><lst name="restorestatus"><str name="snapshotName">snapshot.20160802100744911</str><str name="status">failed</str><str name="exception">org.apache.solr.common.SolrException: Exception while restoring the backup index</str></lst>
</response>
I was thinking it should be possible to restore a backup made with a previous version of Solr to a newer one, but am I wrong? Any help will be greatly appreciated.
|
Is it possible to restore backup from Solr 6.0 to Solr 6.1?
|
You passed NULL as the last parameter to BackupRead, which is clearly invalid according to the docs.
lpContext [out] Pointer to a variable that receives a pointer to an
internal data structure used by BackupRead to maintain context
information during a backup operation. You must set the variable
pointed to by lpContext to NULL before the first call to BackupRead
for the specified file or directory. The function allocates memory for
the data structure, and then sets the variable to point to that
structure. You must not change lpContext or the variable that it
points to between calls to BackupRead. To release the memory used by
the data structure, call BackupRead with the bAbort parameter set to
TRUE when the backup operation is complete.
You should pass a pointer to a pointer variable that is initialized to NULL, not a NULL value itself (LPVOID is void*, so a pointer to it is void**).
The same goes for numberofbytedsreadinreadFile: you should pass a pointer to an existing variable, not a null pointer; it is an out parameter.
void* backupContext = NULL;
DWORD numberOfBytesRead = 0;
cout << "Point Of Crash" << endl;
if (!BackupRead(
source,
&buff,
numberOfBytesToRead,
&numberOfBytesRead,
FALSE,
TRUE,
&backupContext
))
You should also return from this method if you got an invalid handle instead of continuing.
|
Below is my C++ code in which I tried to back up a file, including its security information. I used BackupRead, but whenever the code is called the exe crashes.
char buff[225280];
DWORD numberOfBytesToRead = 225280;
DWORD dwBytesRead=0, dwBytesWritten, dwBytesRead2=0;
BOOL bProcessSecurity = TRUE;
LPWSTR sourceBackupFile = L"E:\\myFolder\\backup.txt";
HANDLE source = CreateFile(sourceBackupFile, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
// Check for errors
if (source == INVALID_HANDLE_VALUE) {
cout<<"The Handle is Invalid:"<<GetLastError()<<endl;
}
else
{
cout<< "\n The source file is in E:\\myFolder\\backup.txt" <<endl ;
}
LPDWORD numberofbytedsreadinreadFile = 0;
cout << "Point Of Crash" << endl;
if (!BackupRead(
source,
&buff,
numberOfBytesToRead,
numberofbytedsreadinreadFile,
FALSE,
TRUE,
NULL
))
{
cout << "Backup Read Failed with the error::" << GetLastError() << endl ;
}
It prints this before crashing
The source file is in E:\\myFolder\\backup.txt
"Point of Crash"
|
Exe crashes in BackupRead Windows function
|
When an index is being restored, it is closed, which means you cannot index any documents into it.
|
If I am restoring an index in ES while indexing more docs of the same type into the same index, how does ES behave?
Is there any performance impact?
What happens if I am restoring documents with the same IDs as ones that are being referenced?
Is there any happens-before relationship I should care about?
|
When I Restore in Elasticsearch 2.2 and Index more docs at the same time, how ES Behaves?
|
First, I think you should plug your thumb drive into another computer to confirm that the thumb drive itself works. If it does, then something is wrong with the Windows 10 system image tool. I recommend a backup tool such as AOMEI Backupper, which can create a system image of an entire Windows 10 installation. Download the freeware, install and launch it, and then follow the wizard of its System Backup function. One thing to pay attention to: you must plug the thumb drive into your computer before launching AOMEI Backupper, so that the device can be detected.
Just one thing for confirmation. While creating the system image in the thumb drive, does AOMEI delete existing files in the thumb drive or does it retain them?
– priyamtheone
Aug 10, 2023 at 16:17
|
Fellow Forum Members,
How does one create a System Image of an entire Windows10 installation onto a 128GB thumb drive? I tried doing it through the Windows 10 System Image tool but it sees a thumb drive as an invalid storage device. Then I Googled the subject and learned I need to convert the thumb drive to a Local Disk. Can anyone out there recommend the best app to use to convert a thumb drive to a Local Disk?
Another option I learned about is to backup to a network location and convert the thumb drive to a network drive. However, I am unable to find any info that will show me how to convert my thumb drive to a network drive.
My goal is to install my thumb drive and transfer it over to my blank SSD drive if I ever have to rebuild it in the future.
Any info will be greatly appreciated and thank you in advance.
|
Windows 10 Image Backup to Thumb Drive
|
Can you try
tar cvzf /NAS/for_tape/FILESERVER.tgz `find /NAS/Backup_FILESERVER/ -type d -exec sh -c "ls -1rt" \; | tail -2 | head -n 1`
The find command combined with ls -1rt sorts the files by modification time and reverses the order.
You can confirm that the command find /NAS/Backup_FILESERVER/ -type d -exec sh -c "ls -1rt" \; | tail -2 | head -n 1 gives the folder you need before starting the compression.
Thank you for your suggestion, but when I run this command, nothing happens. I waited for about 5 minutes, but no archive was created.
– Archer
Jun 23, 2016 at 7:41
@RickySpanish: find /NAS/Backup_FILESERVER/ -type d -printf "%p\n" is not returning any folders? Are you sure the folder has contents?
– Inian
Jun 23, 2016 at 8:19
No folder was returned. I'm pretty sure there are 5 folders in this directory. :/ Without the commands after the pipe, I get the content of the oldest folder.
– Archer
Jun 23, 2016 at 9:06
@RickySpanish: Can you try just -print, as in find /NAS/Backup_FILESERVER/ -type d -print, without the sort options? This should return the folders; otherwise the folder you are searching in is empty.
– Inian
Jun 23, 2016 at 9:13
Sorry, I didn't see your answer. With this command, I also get the content of the oldest folder. Maybe the content of all folders in /NAS/Backup_FILESERVER/.
– Archer
Jun 23, 2016 at 11:22
|
I've tried to develop a little backup shell script.
When I run it in the Backup_FILESERVER folder, it creates the tgz.
But from /root I get an error.
tar cvfz /NAS/for_tape/FILESERVER.tgz /NAS/Backup_FILESERVER/`ls -Art | tail -2 | head -n 1`
Error:
tar: Tuesday: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
In the folder "/NAS/Backup_FILESERVER" are 5 folders for each weekday. Monday, Tuesday, ...
Is it possible to make it runable?
|
Shellscript 2nd newest folder in tgz
|
Yes, you can just copy your Firefox/Iceweasel profile over. For Firefox your profiles are in $HOME/.mozilla/firefox, and it's similar for Iceweasel.
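From the command line that can be as simple as the following (the destination mount point is an assumption):
# copy the whole profile directory, preserving permissions and timestamps
cp -a ~/.mozilla/firefox /media/usbdrive/firefox-profile-backup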
|
I have destroyed my Debian Jessie installation and I need to reinstall it. I want to back up my Iceweasel passwords and bookmarks, but I can't start the desktop environment anymore, so I have to do it from the command line. Will it work if I just copy the iceweasel directories and paste them into my new installation? If not, is there another way? I don't want to take any chances, so I'm asking here.
|
Backup Iceweasel Bookmarks and Passwords from CL
|
If you look at the output of mysqldump (before you gzip it) you will see that it contains a sequence of
DROP TABLE x;
CREATE TABLE x (...);
INSERT INTO x (...) VALUES (...);
So, no, it does not do an insert / replace, it drops and recreates the tables.
|
I'm migrating a MySQL DB from one host to another, so I run the following command to back up the DB from the old hosting:
mysqldump -u **** -p **** | gzip > /home/***/***.sql.gz
And then use the following command to import the DB to the new host:
zcat /home/***/***.sql.gz | mysql -u *** -p ***
After successfully importing the DB, I point the domain to the new DNS.
The problem is that the website is active so new records are very likely to get inserted after the last backup. So, I may need to run the command once again after full DNS propagation.
So, my question: does the mysql command insert the new rows and update the existing ones, or does it actually drop the tables completely and start over with the backup? If that happens, the records that have been inserted after DNS propagation might get lost!
Thanks
|
Migration of MYSQL database without losing records
|
You can use the built-in Linux date command to name the directory however you want, for example:
xtrabackup --backup --target-dir=/data/backups/inc`date +%Y%m%d` (rest options)
|
It says on the manual that if you want to create an incremental backup you can do it with the following command:
xtrabackup --backup --target-dir=/data/backups/inc1 \
--incremental-basedir=/data/backups/base --datadir=/var/lib/mysql/
where /data/backups/inc1 is the incremental directory. So now if I want to create a cronjob (which I don't think I'm the only one), I have to figure out a way to name my directory every time I want to create a new incremental backup, which could be tedious.
Is there any way to make xtrabackup create directories using timestamps instead?
|
how can I create incremental backups with xtrabackup automatically
|
GCE has added a new level of abstraction. The disks were separated from the VM instance. This allows you to attach a disk to several instances or restore snapshots to another VMs.
In case your VM or disk becomes corrupt, the snapshots are safely stored elsewhere. As for additional costs, keep in mind that snapshots store only data that changed since the last snapshot. Therefore the space needed for 7 snapshots is often not more than 30% more than that of one snapshot. You will be charged for the space they use, but the costs are quite low from what I observed (I was charged $0.09 for a 3.5 GB snapshot during one month).
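For reference, snapshots can also be created and restored from the command line; a rough sketch (disk names and zone are placeholders):
gcloud compute disks snapshot my-disk --zone us-central1-a --snapshot-names my-disk-backup
gcloud compute snapshots list
gcloud compute disks create my-disk-restored --source-snapshot my-disk-backup --zone us-central1-a
The last command shows the restore path: a snapshot is turned back into a new disk that you can attach to any VM.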
|
|
I've made two snapshots using the GCE console. I can see them there on the console but cannot find them on my disks. Where are they stored? If something should corrupt one of my persistent disk, will the snapshots still be available? If they're not stored on the persistent disk, will I be charged extra for snapshot storage?
|
On a google compute engine (GCE), where are snapshots stored?
|
The default save folder for 3dsmax is located under:
C:\Users\username\Documents\3dsMax\*
Make sure to check there.
Other than that folder, we cannot know where you saved things.
|
I'm a trainee working in a company using 3dsMax Design 2015. Because of bugs (the software cannot be opened anymore) the admin will try to reinstall it. I have to save data (3D models, I think), but 3ds Max is so huge I fear I will forget something.
(A lot of things are saved on a server, but we want to be sure we don't lose work by mistake.)
I already:
made a zipped copy of the 3dsmax folder (just in case)
looked in %appData% and found nothing but error reports
Can you help me and tell me where to look, please?
Best regards
|
3dsMax reinstall ; I fear to lost 3d models ; where to look to save?
|
You can take hourly EBS snapshots or you can create a process to copy your data to S3 hourly. What is "economical" is entirely subjective and you would need to provide more information about your requirements before I could answer that completely.
EBS snapshots are incremental, so if you created a snapshot hourly each snapshot would only be backing up the changes that had been written to the volume in the last hour. So from the perspective of the storage space you would be paying for, that would be a fairly economical way to provide hourly backups of EC2 instances.
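A hedged sketch of the snapshot route, run from cron on any machine with the AWS CLI configured (the volume ID is a placeholder):
0 * * * * aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "hourly backup"
You would still want a separate cleanup job that deletes snapshots older than your retention window, otherwise the count grows without bound.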
|
I'm looking for best practices for taking hourly backups of my instances in the AWS environment. I'm using both Ubuntu and Amazon Linux instances as web servers, without any control panels.
Is an hourly snapshot economical?
|
Hourly backups for AWS instances
|
Deleted documents (and remember that an update is a delete + an add internally) are not removed before optimize is called on the index or the mergeFactor is hit. This causes the index files to be rewritten to disk, and any deleted content is expunged.
After the index files have been rewritten, the old files are removed and the new index files does not contain the old, deleted documents.
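If you want to force that space to be reclaimed before taking a backup, an optimize can be triggered over HTTP; a sketch, with the host and core name as assumptions:
curl 'http://localhost:8983/solr/yourcore/update?optimize=true&waitSearcher=true'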
|
|
I have a query regarding the size of our Solr data backup. We take Solr backups once a day. We observed that the size of the Solr backup was reduced by 1 GB compared to the previous day, but there had been no deletions or updates made on Solr that day. We also checked the number of documents for both days; it was higher for the backup with the smaller size. Is this because of some optimization that Solr does internally?
|
Reduction in Solr size
|
PostgreSQL is an experimental service and there is no dashboard or other advanced features (daily backup, for example) that you can find in the other services you mentioned. If you want a backup you could write an ad-hoc script that saves/exports all tables as you want, and run it every day.
If you need PostgreSQL you can create a PostgreSQL by Compose service ($17.50/mo for the first GB and $12 per extra GB).
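A minimal sketch of such a script, assuming you take the connection URI from the service credentials (the DATABASE_URI variable and target host are placeholders):
pg_dump "$DATABASE_URI" | gzip > backup-$(date +%Y%m%d).sql.gz
scp backup-$(date +%Y%m%d).sql.gz user@backuphost:/backups/
Run it daily from cron (or a scheduled worker app) and you have a basic point-in-time copy you can restore with psql.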
|
|
We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this service at a cost, but we are not ready for it yet.
|
Setting up backup strategy for backing up postgresql database on cloud foundry
|
There are two things that you are wanting to do:
Copy an EBS snapshot to a different region
Make an EBS snapshot available to a different account
These actions can be invoked via the AWS Command-Line Interface (CLI).
Copy an EBS snapshot to a different region
Use the copy-snapshot command to copy the snapshot to a different region:
aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-1234abcd --description "This is my copied snapshot."
The snapshot will remain associated with the same AWS account.
Make an EBS snapshot available to a different account
Use the modify-snapshot-attribute command to grant access from a different AWS account:
aws ec2 modify-snapshot-attribute --snapshot-id snap-1a2b3c4d --attribute createVolumePermission --operation-type add --user-ids 123456789012
Copying NEW snapshots
You also mentioned copying new snapshots. There is no pre-supplied logic for determining 'new' snapshots, so your script would have to determine which snapshot(s) you would like to copy. Snapshots copied to other regions receive a new snapshot-id, so it isn't easy to match the originals and copies.
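One way to approximate "new" is to keep a local list of snapshot IDs you have already copied and diff it against what currently exists; a rough sketch (regions and the state file are assumptions):
aws --region us-west-2 ec2 describe-snapshots --owner-ids self --query 'Snapshots[].SnapshotId' --output text | tr '\t' '\n' | sort > current.txt
touch copied.txt
comm -23 current.txt <(sort copied.txt) | while read id; do
  aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id "$id"
  echo "$id" >> copied.txt
done
Running that from cron gives you "copy only what is new" behaviour without relying on snapshot naming.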
|
I want to use the ec2-modify-snapshot-attribute command to automatically copy new snapshots to another account.
What would be the best approach on this? A shell script run by a cron job?
|
Using "ec2-modify-snapshot-attribute" to automatically copy snapshots to another account
|
I don't think regex is necessary here. The only missing part is to check if the file contains old strings before creating a .bak file. So, please try the following approach:
import os
import fileinput

def multipleReplace(text, wordDict):
    for key in wordDict.keys():   # the keys are the old strings
        text = text.replace(key, wordDict[key])
    return text

myDict = {}   # dictionary with keys (old) and values (new)
home = ''     # some directory

for dirpath, dirnames, filenames in os.walk(home):
    for Filename in filenames:
        filename = os.path.join(dirpath, Filename)
        if filename.endswith('.txt') or filename.endswith('.xml'):
            with open(filename, 'r') as f:
                content = f.read()                      # read the file content once
            if any(key in content for key in myDict):   # check if any old string is present
                with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
                    for line in file:
                        print(multipleReplace(line, myDict), end='')
|
I am using os.walk to walk through a directory searching for certain filetypes. Once a filetype has been found (such as .txt or .xml), I want to use this definition to replace the strings (let's call it old) in the file with the strings from a dictionary (let's call it new).
def multipleReplace(text, wordDict):
for key in wordDict:
text = text.replace(key, wordDict[key])
return text
At first, I had this loop:
myDict = #dictionary with keys(old) and values(new)#
home = #some directory#
for dirpath, dirnames, filenames in os.walk(home):
for Filename in filenames:
filename = os.path.join(dirpath, Filename)
if filename.endswith('.txt') or filename.endswith('.xml'):
with fileinput.FileInput(filename,inplace=True,backup='.bak') as file:
for line in file:
print(multipleReplace(line,myDict),end='')
This worked quickly and would replace the old strings with the new strings in every file that it found the old strings in. However, the problem lies in my script creating a .bak file for every file, regardless of whether or not it even found the old strings in them.
I want to create a .bak file only for the files that contain the old strings (only for files where the replacement was done).
I tried to read all the files and append only those that contained the old strings (using something like new0), so that I could use the FileInput method for those files only, but the regex lookup takes forever.
|
FileInput: Make backup files only for files in directory that have been worked on
|
Since you mentioned you have an encrypted DB, you need to have the Oracle Wallet open; assuming the DB instance is up, it usually already is.
I do not think you can/should use the "exp" utility. It has been replaced by the more powerful "expdp" and "impdp" utilities. These two utilities will allow you to successfully back up and restore encrypted data.
Please look into the Oracle Utilities guide for further command line reference. Generally, for expdp you need to use ENCRYPTION and ENCRYPTION_PASSWORD, and you may or may not use the Oracle Wallet.
My preferred way, though, is to use RMAN.
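A hedged expdp/impdp sketch (directory object, dump file name and password are placeholders; check the Utilities guide for the options that match your wallet setup):
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_enc.dmp ENCRYPTION=ALL ENCRYPTION_PASSWORD=MySecretPwd
impdp system/password DIRECTORY=dpump_dir DUMPFILE=full_enc.dmp ENCRYPTION_PASSWORD=MySecretPwd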
You mean that if I use expdp utility with ENCRYPTION_PASSWORD option, then I can backup encrypted data? is it right?
– Double J
Mar 27, 2016 at 13:38
|
|
I want to back up my DB, which is encrypted with TDE.
So I ran the exp command, but I got errors because of the encrypted tablespaces.
Is there any way to back up my DB encrypted by TDE?
I don't have an idea.
Please help me.
|
Back up a oracle DB encrypted by TDE
|
The problem in the code above is that you perform a comparison but you never update the backup variable's value in the loop.
It should look more like this:
#include<stdio.h>
#include<stdlib.h>
#include<string.h>   /* for strcmp */
#include<unistd.h>
#include<time.h>
int main(int argc, char *argv[])
{
int b=1;
char backup[100];
char *source=getenv("BackupSource");
char *destination=getenv("BackupDestination");
char *btime=getenv("BackupTime");
time_t getTime;
struct tm *actualTime;
while(b)
{
//in each loop you get the time so it can be compared with the env variable
time(&getTime);
actualTime=localtime(&getTime);
strftime(backup, 100, "%H:%M", actualTime);
//no need for a while loop in a while loop
if(strcmp(backup,btime)==0)
{
system("cp -r $BackupSource $BackupDestination");
}
sleep(60);
}
return 0;
}
|
As the title says, I'm trying to write a program that will backup files from a source directory (set by the user in the shell as an environment variable) to a destination directory (again set by the user in the shell as an environment variable) at a specific backup time (set by the user in the shell as an environment variable - format HH:MM). My code is the following:
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<time.h>
int main(int argc, char *argv[])
{
int b=1;
char backup[100];
char *source=getenv("BackupSource");
char *destination=getenv("BackupDestination");
char *btime=getenv("BackupTime");
time_t getTime;
struct tm *actualTime;
time(&getTime);
actualTime=localtime(&getTime);
strftime(backup, 100, "%H:%M", actualTime);
while(b)
{
while(strcmp(backup,btime)!=0)
{
sleep(60);
}
system("cp -r $BackupSource $BackupDestination");
}
return 0;
}
My question is the following: when the environment variable for BackupTime is set, my infinite loop doesn't work. I've inserted print statements at every step in the loop, and when the variable for BackupTime isn't set from the shell it always works. When the variable is set, the program compiles without any warnings or errors but does absolutely nothing. I know the strcmp(backup, btime) part works because I've printed it separately, and when they are both the same it returns 0.
Any ideas on how I could make it work?
|
Running a C program to backup Linux files
|
Yes, it works as a standalone server: I configured the Replication RequestHandler and then I'm able to backup cores while they are running.
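For reference, once solr.ReplicationHandler is registered at /replication in solrconfig.xml, the backup can be triggered per core over HTTP; a sketch (core name, location and retention count are assumptions):
curl 'http://localhost:8983/solr/core1/replication?command=backup&location=/backups/solr&numberToKeep=3'
curl 'http://localhost:8983/solr/core1/replication?command=details'
The second call reports the status of the last backup, which is handy when scripting this from cron.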
How would you configure replication endpoint in case of standalone solr? Can you please provide some reference?
– Hiren
Feb 19, 2018 at 20:53
|
|
I would like to backup my solr 4.8 index periodically by using
curl http://localhost:8983/solr/gettingstarted/replication?command=backup
I don't understand if it is mandatory to build a master-slave architecture, or if it is sufficient to configure the Replication RequestHandler on the master/stand-alone server.
|
Solr 4 backup with replication handler
|
Use the time driven triggers to run this code:
function backupSheet() {
var file = DriveApp.getFileById(FILE_ID);
var destination = DriveApp.getFolderById(FOLDER_ID); // backups folder
var date = new Date();
var ts = date.toISOString().slice(0,10).replace(/-/g,"");
file.makeCopy(ts+':'+file.getName(), destination);
}
|
|
I have a critical business spreadsheet. I need to save copies regularly in case I need to see how the spreadsheet looked at a previous time.
I want this to happen automatically using Google Apps Script.
|
How do I backup Google Drive (Google Spreadsheets) file on regular basis using Google Apps Script?
|
Without seeing any code, I would imagine that it is trying to stream the output binary file to the server backup location.
The result of this is that every byte that gets written needs to be confirmed by the client/server relationship.
When you write it to your local system, however, and then move it to the server location, you are performing a single transfer, as opposed to individual read/write operations for each segment of the file being written by the stream.
It's kind of similar to how contiguous file operations are faster on SATA drives.
If pasting or copying a 3 GB file, you can attain really high speeds.
If pasting 3000 files that are 1 KB each, your write speed won't actually be that fast, because it's treated as 3000 operations vs the single operation that can go at full speed.
Do you know if the other backup programs save the backup locally before moving?
I would imagine that they construct a temp file which is then moved server side.
Sorry for my bad English. I had the same idea about the file being sent in segments. The other software runs the direct transfer without saving the file locally.
– Lo.
Feb 18, 2016 at 14:47
|
|
I am creating backup software in C# for my organization.
I have a problem with the time it takes to back up my workstation to a shared folder on a server.
If I compress the files directly to the shared folder, with the temp file created directly on the shared folder, the compression takes 3 minutes; but if I set the temp dir on the workstation, the compression takes 2 minutes.
I tested this job with another backup program, and its backup process with the temp file created directly on the shared folder takes 2 minutes.
What is wrong with DotNetZip?
|
.NET - DotNetZip Backup over the network slow
|
First things first, I was overlooking logcat filters! Simply disabling filters allowed me to see the error message.
The first issue was Rejecting full data backup. user has not seen up to date legal text - this is apparently because old, existing google accounts aren't opted in to the backup service. This is slightly horrifying, as there's no simple way to opt in; how on earth users are going to get access to this I do not know.
To resolve this, remove all Google accounts from the device then turn it off and on again (if you don't restart, it won't allow any added account to act as a backup account!).
Once it's restarted, it should pop up a notification complaining that no account is setup for backups: just add your google account back, enable backups and hopefully you'll finally have it working.
This didn't fix everything for me - I'm still having issues getting it to recognise the gms transport - but the initial issue has been resolved.
|
I'm looking to implement the new autobackup feature introduced in Android M, as detailed in the docs here: http://developer.android.com/training/backup/autosyncapi.html#testing
I'm after easily restoring player database and shared preferences between installs, which this feature purports to enable for Android M. I'm not implementing the android backup service at this time.
The docs claim it's basically enabled by default, no need to write backup management classes and the like, at least for Android M devices - however, I can't get it to work.
adb shell bmgr enabled returns Backup Manager currently enabled
adb shell bmgr run doesn't say anything, but when I then run adb shell bmgr restore com.xyz.abc I get told:
Unable to restore package com.xyz.abc
done
The docs say adb shell setprop log.tag.BackupXmlParserLogging VERBOSE will enable logging, but I can see literally no effect in the terminal or in logcat, and I can't think of another place that it would log to!
My manifest has android:fullBackupContent="@xml/backupscheme" in the application tag, as per the docs, and backupscheme.xml contains
<?xml version="1.0" encoding="utf-8"?>
<full-backup-content>
<include domain="database" path="database.db"/>
<include domain="sharedpref" path="com.xyz.abc_preferences.xml"/>
<exclude domain="external"/>
</full-backup-content>
The database and shared preferences paths came from checking the decompiled app, and I've tried with this xml stripped back to nothing - nothing changes.
As far as I can tell from the docs, that should be sufficient for it to work, and yet I'm seeing nothing being persisted when I uninstall and reinstall the app.
Am I overlooking something? Are there any assumptions I'm making that I haven't questioned? Why doesn't it just work?! There's so little information out there, I really hope someone else is implementing this and has some guidance here!
|
Implementing Android M's AutoBackup Feature
|
This is what worked for me, but don't ask me why.
$strBackupDrive = "D:"
$strBackupComand = 'start backup ' + [char[]]45 + 'include:C:\Temp\ ' + [char[]]45 + 'backupTarget:' + $strBackupDrive + ' -quiet'
$errRun = Start-Process wbadmin -ArgumentList $strBackupComand -wait -NoNewWindow -PassThru
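For what it's worth, a simpler sketch that avoids building the argument string by hand is to call the executable directly with the call operator (the drive letter is an assumption):
& wbadmin start systemstatebackup -backupTarget:F: -quiet
PowerShell passes each token through to wbadmin as-is, so no extra quoting gymnastics are needed.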
|
|
Can you suggest on how to make this script work?
It is working properly in cmd via this command:
wbadmin start systemstatebackup 'backuptarget:"F:"' '-quiet'
It is working in cmd running powershell via this command:
[powershell] wbadmin start systemstatebackup 'backuptarget:"F:"' '-quiet'
But it is not working inside a PowerShell script (backups.ps1). I'm confused by the use of single and double quotes. The following iterations are not working:
backup.ps1
$IFMResult2 = "WBADMIN START SYSTEMSTATEBACKUP -backupTarget:E: -quiet"
$IFMOutput2 = Invoke-Expression $IFMResult2 | out-string -stream
$IFMResult2 = "WBADMIN START SYSTEMSTATEBACKUP 'backuptarget:"F:"' '-quiet'"
$IFMOutput2 = Invoke-Expression $IFMResult2 | out-string -stream
Error is
WBADDMIN Error (Systemstatebackup): the value for option backuptarget is missing
|
Wbadmin in powershell
|
Using rsync on a running mongod dbpath may result in corruption or unexpected errors. Please refer to MongoDB Backup Methods for supported backup and restore alternatives.
While using rsync, please consider the following suggestion from MongoDB manual:
If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.
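If you stay with file copies, a minimal sketch of the stop-writes-then-copy sequence (paths are assumptions):
mongo --eval "db.fsyncLock()"
rsync -a /var/lib/mongodb/ /backups/mongodb-$(date +%F)/
mongo --eval "db.fsyncUnlock()"
fsyncLock flushes and blocks writes so the files on disk are consistent while rsync runs; unlock immediately afterwards.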
thank you for the answer. I saw the methods, but non of them offered an incremental solution, and trying to use oplog as an incremental backup resulted in very large backup. the problem by the way was wrong storage engine.
– user3371266
Mar 28, 2016 at 11:46
|
|
I've created a backup process for the mongo dbdir.
The restoration process takes one of the backups created by rsync and copies it to a new disk and mounts it on the data dir.
After the process I still see collections and databases (databases appear as empty) that existed before the restoration process, until I restart mongo.
I would like to avoid this restart if possible,
is there any way to cause mongo reload its data files on the fly?
(I didn't use mongodump because it for some reason inflated the db from 4GB to 40GB after mongorestore, but that is a different issue)
|
Mongo shows new collections and databases after restoration from backup
|
You can't change it programmatically, but you can change it by editing the application name here: Developer Console by going to your project -> Enable API and get credentials like keys -> Enabled API(s) -> Drive API -> Drive UI integration. You can enter a description and upload icons as well.
|
|
I have created app folder on Google Drive using Storing Application Data and Google Drive Android API Demos
My app folder name is display as "Text Editor" as shown in below image.
How to change app folder name on Google Drive programmatically ?
|
How to change app folder name on Google Drive Android?
|
Doesn't this message say it all?
Operating system error 5(Access is denied.)
This just means that the account the SQL Server service runs under does not have write access to C:\ on the server; the backup file is written by the SQL Server service itself, not by sqlcmd, so the path always refers to the server's own drive.
Also, storing backups in the root of the boot partition is not a good idea. Consider creating a folder, assigning the required access privileges to the SQL Server service account, and writing to that folder.
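A hedged sketch of that, run on the server itself (the service account name depends on the instance; NT Service\MSSQLSERVER is only the default):
mkdir C:\SQLBackups
icacls C:\SQLBackups /grant "NT Service\MSSQLSERVER":(OI)(CI)M
sqlcmd -S gigbat -d FOD -Q "BACKUP DATABASE [FOD] TO DISK='C:\SQLBackups\FOD.bak'"
Remember the path in TO DISK always refers to the server's own file system, so to get the file onto your workstation you still have to copy it over a share afterwards.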
I want to write to my local c drive - to which I already have access. Why is it saying denied?
– ManInMoon
Jan 19, 2016 at 10:29
i am admin. I am running from cygwin terminal which has access write to c drive
– ManInMoon
Jan 19, 2016 at 10:33
|
|
I am using:
$ sqlcmd -S gigbat -d FOD -Q "BACKUP DATABASE [FOD] TO DISK='C:\testDBbak1.bak'"
Msg 3201, Level 16, State 1, Server gigbat, Line 1 Cannot open backup
device 'C:\testDBbak1.bak'. Operating system error 5(Access is
denied.). Msg 3013, Level 16, State 1, Server gigbat, Line 1 BACKUP
DATABASE is terminating abnormally.
I assume it is trying to write to the server's c drive?
How can I specify my local c drive?
|
How do I store my backup locally on an sql server using sqlcmd
|
An ejabberd Mnesia backup backs up all the data stored in Mnesia, so if your archives are in Mnesia they will be backed up as well.
However, as always with backups, you must test the process from backup to restore to validate that it works as expected and matches your needs.
|
Does making an ejabberd binary backup of the Mnesia database from the admin panel also back up archived messages stored in the MUC archive and the private chat archive? If not, how do I back up archived messages?
|
ejabberd Mnesia database backup
|
Unfortunately, Parse doesn't offer an automated way to back up your data. If your app is too large to back up with queries via the REST API, it is also likely to take a long time to export from their side and consume a good chunk of resources; allowing all apps to perform such an export automatically, on a schedule, would have a significant effect on performance for the entire platform. They do allow you to export your data, but that also takes time depending on your data size, and they generally email it to you.
|
I recently had an issue with the free tier of Heroku Redis where our database got wiped due to "an incident" and there was no back-up of our data.
I'm about to start using the free tier of Parse and was wondering if their free tier operates in a similar way?
Thanks in advance
J
|
Parse: data backed up on free tier?
|
If you could quantify "few" and "some" in "A few months ago, I lost some files" (where "few" would be considered to be replaced with "every few" in order to get a rate), then you could calculate the probability of a false positive. However just from those words, I would say, yes, a 32-bit CRC should be fine for your application.
As for speed, if you have a recent Intel processor, you likely have a CRC-32C instruction, which can make the calculation much faster, by about a factor of 15. (See this answer for some code.) That could be made faster still by running it over multiple cores. If done right, you should be limited by the I/O, not the calculation.
There is no advantage in this case to bundling them in a zip or rar. In fact it may be worse, if a corruption of that one file causes you to lose everything.
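If you want something scriptable without extra tools, the POSIX cksum utility (a CRC-32 variant) can produce a baseline you simply diff later; a sketch with assumed paths:
find /data -type f -print0 | xargs -0 cksum | sort -k3 > baseline.crc
find /data -type f -print0 | xargs -0 cksum | sort -k3 > current.crc
diff baseline.crc current.crc && echo "all files still match"
It is not hardware-accelerated like CRC-32C, but on most disks the run is I/O-bound anyway and it needs nothing beyond the base system.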
|
I have 3 terabytes, more than 300,000 reference files of all sizes (20, 30, 40, 200 megas each) and I usually back them up regularly (not zipped). A few months ago, I lost some files probably due to data degradation (as I did "backup" of damaged files without notice).
I do not care about security, so do not need MD5, SHA, etc. I just want to be assured that the files I'm copying are good (the same bits and bytes) and verify that backups are intact after a few months before making backups again.
Therefore, my needs are basic because the files are not very important and there is no need for security (no sensitive information).
My question: is the "SFV CRC/32" format/method good and fast enough for my needs? Is there something better and faster than that? I'm using the program ExactFile.
Is there any checksum faster than SFV/CRC32 that is not flawed? I tried using MD5 but it is slow, and since I do not need data security, I preferred SFV/CRC32. Still, it's painful, because there are more than 300,000 files and it takes hours to compute the checksum of all of them, even with an 8-core Xeon CPU with HT and a fast HDD.
From the point of view of data integrity, is there some advantage in joining all the files into one .ZIP or .RAR instead of leaving them "loose" in folders and files?
Some tips?
Thanks!
|
SFV/CRC32 checksum good and fast enough to check for common backup files?
|
Use the options -b and --backup-dir=/path/to/dir and rsync puts the deleted files in the backup dir. You can then do whatever you want with them!
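A sketch of the full command (paths are examples; an absolute --backup-dir is interpreted on the receiving side):
rsync -avz --delete -b --backup-dir=/NAS/deleted-$(date +%F) /src/ user@host:/dest/
Dating the backup dir keeps each run's deletions separate, so you can dig out yesterday's mistake without wading through older ones.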
|
|
The output of
rsync -avzn --delete lists the files to be deleted. I mount the file system with Samba, and then I can get a list of the files to be deleted with
| grep deleting
e.g. (it's Windows, so there are spaces in the filenames)
deleting janes/pass the parcel.jpg
deleting janes/Noname.jpg
deleting janes/111EUVAT.jpg
I'd like to copy them somewhere 'just in case' SWMBO realises that she has made a mistake. I can list just the files to be deleted with
| grep deleting
What do I do next to copy them somewhere? Something with xargs?
Thanks
|
How to copy files to be deleted with rsync ?
|
If you want to run cron as the main purpose of the container, then fine; look at some older questions:
How do I start cron on docker ubuntu base?
Cron containers for docker - how do they actually work?
If you want to run it as a side task (as cron is usually run), I would reconsider and go with the first option :)
|
I want to implement backup tasks for my Docker containers using crontab.
Questions:
Is this a nice way to implement backup tasks for Docker containers?
How do you add a crontab? In the Dockerfile?
|
Adding a backup crontab into a docker container
|
Issue the Command: # tar cvzf backup.tar.gz /var/www
Where:
c - create backup
v - verbose output
z - compress in gzip
f - backup file name
The backup will be created in your current working directory.
Use ls command to list it
|
I have a dedicated server running CentOS. I want to know how to take a backup of my files in /var/www. The total size is very big, around 100 GB+, so I want to compress it using zip or tar.gz; anything will work.
Can anyone please provide the CentOS command line for it - how to download all files from /var/www after compressing them to make the size smaller?
|
How to compress file and take a backup in centos [closed]
|
V2 VM backup in a Resource Group is available now.
Check this link:
https://azure.microsoft.com/en-us/documentation/articles/backup-azure-vms-first-look-arm/
This is link only answer.
– ketan
Apr 4, 2016 at 10:18
|
|
I have 16 VMs running Windows Server 2012. Some were created with ARM templates, some manually in the new portal.
I now need to get them all discovered by Azure Backup so they can be captured at the "VM level". These new VMs do not show up in the classic portal and do not show as "discovered".
Does scripting exist that can force the discovery so I can make VM-level backups of these resources?
Thanks.
|
Azure VM Backup of v2 Hosts?
|
@echo off
setlocal
set "src=C:\Users\MyName\Photos"
set "dest=E:\ExtBackup\2015-photo-backup"
rem List every folder (/ad) whose name starts with 2015-10 by using *
for /f "delims=" %%a in ('dir /b /ad "%src%\2015-10*"') do (
    rem dir /b returns bare folder names, so prefix the source path and repeat the name at the destination
    echo robocopy "%src%\%%~a" "%dest%\%%~a"
)
Do not forget to remove echo if the tests are OK.
Note: if you need to copy subdirs, move files or anything else related to robocopy, see this page
|
I would like to use robocopy to copy multiple directories based on a similarity in the folder names' first few characters. How do I pick out certain directory names (perhaps with regular expressions?) and a loop of some sort so that I can avoid this horrible redundancy in copying the directories and their contents? The programmer in me dies a little bit each time I copy and paste these 3 lines and modify the folder names manually.
set "src=C:\Users\MyName\Photos\2015-10-25"
set "dest=E:\ExtBackup\2015-photo-backup\2015-10-25"
robocopy "%src%" "%dest%"
set "src=C:\Users\MyName\Photos\2015-10-13"
set "dest=E:\ExtBackup\2015-photo-backup\2015-10-13"
robocopy "%src%" "%dest%"
set "src=C:\Users\MyName\Photos\2015-10-02"
set "dest=E:\ExtBackup\2015-photo-backup\2015-10-02"
robocopy "%src%" "%dest%"
Rules
I'm not copying all the directories in Photos, so a way to pick out the directory name is needed
The source directory name must be copied too. That's why I'm repeating the source dir name in the destination
must use robocopy
I wish to learn batch and avoid redundant scripting
|
In Windows batch, how do I copy multiple folders selected by a naming criteria with robocopy?
|
All options have to come before the database and table names. Try:
mysqldump -h localhost -u root -p --skip-add-locks --where="creation_date <= '2015-09-31'" colossal_db users | gzip > users.sql.gz
But, it did work! I had 30 million records. And only 1000 of them were in the dump file post the where clause date. How could that have happened?
– MontyPython
Oct 22, 2015 at 17:22
Sorry, misread the question. Were those extra rows for October 1? Sounds like a timezone issue.
– Barmar
Oct 22, 2015 at 17:25
Didn't check the date, I should have checked! I'll check the next time I do that and update here. I deleted all records where creation_date > '2015-09-31'.
– MontyPython
Oct 22, 2015 at 17:30
|
|
I was doing a thorough backup-and-restore procedure, and I needed to use the --where option in mysqldump to fetch only the data from inception up to before October 2015. This was the command that I executed.
mysqldump -h localhost -u root -p --skip-add-locks colossal_db users --where="creation_date <= '2015-09-31'" | gzip > users.sql.gz
When I restored the dump, I found out that the table consisting the restored data contained data for October 2015 also. Why does it happen when I have put a where clause?
|
Why does --where option in mysqldump not work sometimes?
|
You need to give the full qualified path in cron scripts, ~ is not expanded to the home directory.
However the way you quote looks funky. You can call hg and directly specify the path to the repository:
hg -R /full/path/to/repository push URL
Thus
*/60 * * * * hg -R /full/path/to/repository push URL
might do the trick for you.
|
I want to periodically back up a Mercurial repository to the Bitbucket clone. One option is to schedule it with cron, but I fail to see how to 'add' and then 'push' from the cron configuration file (how do I execute 'hg' in the local directory?).
A line like this in the crontab
*/60 * * * * ~/path/to/repository/hg push https://[email protected]/user/repository
does not work.
|
How to backup a Mercurial repository?
|
Use DistCp to transfer the HDFS data to another cluster, or to any cloud storage, in order to keep a copy of the data.
If you want to schedule the backup process, you can use the Oozie DistCp action.
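A minimal sketch of the copy itself (namenode addresses and paths are placeholders):
hadoop distcp -update hdfs://old-namenode:8020/data hdfs://new-namenode:8020/backup/data
-update copies only files that are missing or changed at the destination, which keeps repeated runs cheap.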
|
I am building a new Hadoop cluster (expanding number of nodes and extending capacity of current nodes) and need to back up all of the existing data. Right now I am just tar-ing everything and sending it to another server.
Is there a smarter way of doing this which will allow me to easily deploy once the new cluster is set up?
Edit: I should also point out that I don't store any data on the cluster. I bring data to the cluster, process it, and then send the processed data back to the original server. Any temporary data on the cluster is then deleted.
|
Backup Hadoop in order to install new cluster, best practice
|
You can use "dd" for each partition (need to get their start blocks and active sizes, you may use fdisk for that). Also you need to use "dd" to get boot sector. Then you can create partition table with 4 partitions on second SD card and copy 4 partition images there using "dd" and information on start blocks and sizes for this SD. Also you need to "dd" boot sector.
|
|
I have a 32GB drive (SD card) with 4 partitions. Total partitioned space is <2GB.
I need to make an *.img file so that I can clone it to other SD cards which are smaller than 32GB.
If I just use "dd" I get an image file that is the full size of the card - 32GB.
This is all under Linux and the SD card is bootable, so can't just copy files.
Any suggestions?
|
Drive image only partitioned space
|
Actually, tar does not erase data by default, but any files contained within the tar archive will overwrite files of the same name if they are already present. Likewise, a sub-directory's contents will not be overwritten if the tar archive does not contain files matching them.
mkdir -p foo/bar/
touch foo/file1 foo/bar/file1
tar -cf foo.tar foo/
rm -rf foo
mkdir -p foo/bar/
touch foo/file2 foo/bar/file2
tar -xf foo.tar
ls foo foo/bar/
As one can see, both file1 and file2 are present and the newly unarchived directory did not overwrite the old one. Here is the output of ls from my system:
foo:
bar file1 file2
foo/bar/:
file1 file2
|
I made several backups on different directories with Backup Manager. Eg: /home/user1 /home/user2...
It gives me some tar files. The content of a tar file looks like :
home/user1/
home/user1/.profile
home/user1/.bash_history
home/user1/.bash_logout
...
I tried to test the restoration with something like :
tar -xvzf home.user1.tar.gz -C home/user1
But the command above recreates the whole structure inside the chosen directory. That gives /home/user1/home/user1/filename1.
So I guess I should run the command specifying the home directory (/home) instead of the user directory. But is there any risk of erasing other users' directories in /home?
Thanks for your time.
|
Can tar extraction erase a sibling directory?
|
It depends on the path to which you are trying to restore the backup. If you restore the 10/1 backup to the same path where your second copy from 10/5 is present, then it will delete that copy.
|
|
I plan on restoring from a cPanel backup using the cPanel Backup Wizard, but I would like to know if restoring from a backup in this way will delete any other cPanel backups that were made at a future date from the one I’m restoring from?
Specifically, I have backups generated on 10/1 and 10/5 (earlier today). If I restore from the backup on 10/1, will it delete the backup I created on 10/5?
Additional Details
I'm on a Bluehost VPS server running CentOS 6.7.
|
cPanel - Does restoring from a cPanel backup delete other cPanel backups made at a future date than the one you’re restoring from?
|
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ [email protected]:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
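A sketch of the tar route, run as root so owners and modes survive the round trip (host and paths are placeholders):
sudo tar czpf - --one-file-system / | ssh user@backuphost 'cat > /backups/server-root.tar.gz'
# restore later with: sudo tar xzpf server-root.tar.gz -C /mnt/restore
--one-file-system keeps /proc, /sys and mounted shares out of the archive; extracting as root restores the original owners because they are stored inside the archive itself.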
|
|
I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, with owners, groups and permissions, and also the symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy they are lost.
rsync -azp -H /directorySource/ [email protected]:/home/myUser/myBackupDirectory
My intention is to do it with the / folder, to keep all my configurations just in case; I have 3 services that have their own users and may make modifications in folders outside their home.
In the destination folder the files appear owned by my destination user, whether I run the copy from the server or from the destination; it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same but doesn't work over ssh; anyway I tried it on the server but got many errors. As I remember, tar also keeps permissions and owners, but it gave errors because the server is in use, and the restore process isn't that fast. I also remember the magic dd command, but I made a big partition. rsync looked like the best option, and it keeps the backup synchronized. I saw that newer rsync versions handle owners well, and I have the package upgraded.
Does anybody have an idea how to do this, or what the normal process is to keep my server properly backed up, so I can restore it just by recreating the partition?
The services are Taiga (a project manager platform), a git repository, a code reviewer, and so on; all are working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the work.
|
make server backup, and keep owner with rsync
|
AWS Load balancer doesn't exactly work that way. It distributes the load among all the healthy registered instances.
To maintain high-availability I'd recommend using AWS auto-scaling feature.
Basically, you put your machines behind a load balancer and if any of them starts failing it triggers an event and takes action accordingly. You can start with 2 machines and set the auto-scaling to keep 2 machines running at all times. So if one goes down it just launches another one of the same kind and adds it to the load balancer. You can start with one machine as well but if it goes down there will be some downtime until the new one kicks in. Also you can increase/decrease the number of running instances depending on the traffic you're getting.
Hope this helps to give you an idea.
Thanks for suggestion, I will check it out!
– Minh Ha Pham
Sep 10, 2015 at 10:19
|
|
I am using AWS EC2 instances for my API server. I want to avoid downtime, so I plan to use 2 servers (one production and one backup).
I want to configure the AWS network, or add a load balancer, so that:
Normally, all requests go to the production server.
If the production server goes down, all requests go to the backup server.
I don't know if there is any feature on AWS that could help me do this.
|
AWS network, loadbalancer for one production and one backup server
|
I've found it.
It is located in the personal directory:
~/.config/radicale/collections/contact/AddressBook.vcf
In ~/.config/radicale/collections/ you will also find the calendars.
Hum. This seems to me to be (remotely) a programming question, since its answer is useful for whoever wants to program their own bash backup script.
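With that path known, the Duplicity side can be as small as (target URL is a placeholder):
duplicity ~/.config/radicale/collections sftp://user@backuphost//backups/radicale
Put that line in a cron job and both the address books and the calendars are covered, since they live under the same collections directory.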
|
|
Now that I am running Radicale on my own Linux server (to manage calendars and contacts), I am trying to figure out how to back up address books via a bash script (which I could then run from cron or launch manually).
The exporting part is not going to be so difficult thanks to Duplicity.
But where on earth is the address book located?
There is no *.vcf related to Radicale anywhere on my system.
|
CalDAV/CardDAV Radicale backup
|
Yes, it will let you do that. There are considerations though, regarding full and precise restoration of the data should a restore operation become necessary.
Best you read up on the whole thing so you can choose the best back-up method for your situation.
|
|
Can I use the "backup" transact sql command (sql-server 2008)
when my database is used (read/write) by other users.
Or I must switch to single_user mode before doing this?
|
Is it possible to back up a SQL Server database at runtime?
|
Robocopy is your friend; google it and have a look. It's a very powerful copying tool that ships with Windows. Alternatively, if you want a nice GUI to go with it, use SyncToy (https://www.microsoft.com/en-gb/download/details.aspx?id=15155), which can be run from a batch file (including arguments) or made into a scheduled task.
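A minimal batch sketch (paths are examples; be aware that /MIR mirrors the source, so it also deletes destination files that no longer exist at the source):
robocopy "E:\Designs" "C:\Backup\Designs" /MIR /R:2 /W:5 /LOG+:C:\Backup\backup.log
Hook it to Task Scheduler, or run it manually before you start editing, and you get the 'copy before modifying' safety net without doing it by hand.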
|
I'm a graphic designer and I frequently use my external HD to move psd/jpeg/png/tiff files from my laptop at home to my work PC and vice versa. As a precaution, before modifying or adding a file I copy it to a folder. Is it possible to do this automatically? And is it easy to do in Batch?
Thanks for the help!
|
Is it possible to make a Batch program that backs up a folder's files from my external HD to my PC HDD any time I add or change a file?
|
This has been fixed in ADB Build Tools 23.0.1.
|
I'm trying to view files saved to the internal sdcard on my Android device in order to debug my application.
I used to be able to use adb backup to this this. I just upgraded to Build Tools v23, and now when I try to run adb backup from Terminal in OSX, I get a 'Bus error: 10' response and no backup.
Any ideas?
|
adb backup with build tools v23 fails
|
You've defined a list
#The backup must be stored in a target directory
target_dir = ['"E:\\Backup"']
while your usage indicates you'd intended to use a str there:
#The backup must be stored in a target directory
target_dir = '"E:\\Backup"'
|
|
I'm trying to write code to create a backup of files or a directory using Python, but there is an error: can only concatenate list (not "str").
Here's my code:
import os
import time
# The files or directory which has to be backed up
source = ['"F:\\College Stuffs"']
#The backup must be stored in a target directory
target_dir = ['"E:\\Backup"']
#File will be backed up in a zip file and name will be set to current date
target = target_dir + os.sep + time.strftime('%Y%m%d%H%M%S') + target.append('.zip')
# Create the directory if it's not present
if not os.path.exists(target_dir):
os.mkdir(target_dir) #Make the directory
#Use zip command to put files in a zip archive
zip_command = "zip -r {} {}".format(target,''.join(source))
#run the backuo
print "Zip command is : "
print zip_command
print "Running:"
if os.system(zip_command) == 0:
print 'Successful backup to', target
else:
print 'Backup FAILED'
|
Creating a backup and getting error only concenate list and not (str)
|
I wanted to answer my own question because of the length of the code.
I managed to get things working, except for excluding the directories I wanted to be excluded...
Here I am getting source and destination from user input:
SET /p source="Hostname : "
SET /p dest="Destination path : "
SET /p _OS="a for XP b for W7 backup : "
echo "you chose %_OS%"
IF condition to get correct Windows7 folders path based on previous choice:
IF %_OS%==b (
SET source=%source%\c$\Users
pause
echo !source!
echo %dest%
pause
FOR condition to loop through User folders:
FOR /D %%G in (!source!\*) DO (
if exist %%G\Desktop ROBOCOPY %%G\Desktop %dest%\%%G\Desktop /XD !source!\admin /e /s /copy:datsou
)
PAUSE
When the code is executed, !source!\admin resolves to the correct path to be excluded, but robocopy just copies it anyway.
I also tried to specify the path "\computer1\c$\Users\admin" with and without quotes; same effect, the path is not excluded.
Any ideas?
I am all alone it seems, I will keep writing, maybe this can be helpfull to someone. ROBOCOPY cannot exclude folders he is in, if the FOR index is pointing to the directory I want to exclude, robocopy cannot exclude it. i.e current path C:\Users\Admin\Desktop ---> /XD \Admin
– user5236878
Sep 3, 2015 at 7:06
|
|
This is my first question on StackOverflow, so please be kind ;); I have several computers to backup at different times, each computer has from 2 to 30 users, I want to backup Desktop, Documents and Favorites folders of a specific computer in the network.
Originally, I tried to use XCOPY, but due to the length of folder paths it could not be done, so I used ROBOCOPY instead, but I'm stuck. Here is what I have:
SET source=c:\testA\Users
SET dest=c:\testB
rem Desktop folder backup
for /D %%G in (%source%\*) DO (
if exist "%%G\Desktop" ROBOCOPY /e /s /MIR /copyall "%%G\Desktop" "%dest%\%%G\Desktop" )
This command can't create the destination folder %dest%\%%G\Desktop at run time, because it expands to something like c:\testb\c:\testA\Users\"current username from for index %%G"\Desktop.
It gives me an error on the destination folder: "syntax of file name, dir name or volume name is incorrect."
Theoretically the command itself works, apart from the dest folder, but maybe I am missing something. Any ideas?
|
Use Robocopy to backup specific users folders
|
This may not be the full script, but I think you will get the point:
DECLARE @databaseName nvarchar(100)
DECLARE @fileName nvarchar(100)
DECLARE @serverEdition int;
DECLARE @useCompression bit = 0;
SELECT @serverEdition = Cast(SERVERPROPERTY('EditionID') as int);
-- Reference: http://stackoverflow.com/questions/2070396/how-can-i-tell-what-edition-of-sql-server-runs-on-the-machine
IF @serverEdition IN (
    1804890536,  -- Enterprise
    610778273,   -- Enterprise Eval
    -1534726760  -- Standard
)
BEGIN
    SET @useCompression = 1;  -- Supports compression
END
IF @useCompression = 1
BEGIN
    BACKUP DATABASE @databaseName TO @fileName WITH COMPRESSION;
END
ELSE
BEGIN
    BACKUP DATABASE @databaseName TO @fileName;
END
|
I currently use a pretty basic backup script to backup my SQL databases to a given directory, zipped with Winrar.
I am looking to use the SQL compression command (currently commented out) prior to the Winrar IF the version of SQL the script is being used on is SQL Standard or higher.
Here is what my current script looks like:
Declare @backupPath nvarchar(1000);
set @backupPath = 'C:\Backups\Auto\';
Declare @fileName nvarchar(100);
Declare @currentDate datetime
Declare @fullPath nvarchar(1000);
Declare @databaseName nvarchar(100);
set @databaseName = 'Database_name';
-- Do not change these values
set @currentDate = GETDATE();
set @fileName = @databaseName + '_' + REPLACE(REPLACE(REPLACE((CONVERT(nvarchar(24), GETDATE(), 120)), ':', ''),' ', ''),'-', '') + '.bak'
set @fullPath = @backupPath + @fileName;
print 'adding device ' + @fileName
EXEC sp_addumpdevice 'disk', @fileName, @fullPath;
BACKUP database @databaseName to @fileName --WITH COMPRESSION
print 'dropping device ' + @fileName
EXEC sp_dropdevice @fileName
I would like the script to check for version/edition, then if the Version/Edition is Standard or higher, to run the WITH COMPRESSION command.
|
SQL Script to compress database backups if version allows it
|
/**
* Get directory path where backups stored
*
* @return string
*/
public function getBackupsDir()
{
return Mage::getBaseDir('var') . DS . 'backups';
}
This is the function in the class Mage_Backup_Helper_Data.
You need to log the value returned by this function by changing the code to:
public function getBackupsDir()
{
$result= Mage::getBaseDir('var') . DS . 'backups';
Mage::log($result);
return $result;
}
|
I have this strange behaviour on my Magento installation: every time I try to launch a backup, the file is saved into the var folder instead of the var/backups folder, so it is not visible in the backups list (which looks in the var/backups folder).
Any suggestion? Nothing has been changed since yesterday.
Thanks
|
Magento backup folder
|
The web interface is still not working, but you can use "gsutil", a command line tool to access Cloud Storage. For this you have to download and install the Google Cloud SDK, run gcloud init and gcloud auth login, select your project and log in to the cloud platform.
Now from the command line you can use these commands:
gsutil rm gs://<bucket-url>/<file-name>     # delete a file
gsutil rm -r gs://<bucket-url>              # delete a bucket and its contents
|
I want to delete old backups in Google Cloud Storage.
The 'delete' button is not enabled after items are selected.
Why is 'delete' disabled?
|
How to selectively delete Google App Engine data backup set in google cloud storage via the admin console?
|
There's no such option (yet).
But you can do it to "manually" by recursing the directory structure, downloading the files one by one, handling the errors as you like.
There's an example implementation available in C# and Powershell:
Recursively download directory tree with custom error handling.
|
I'm trying to download a complete folder via WinSCP. However there can be files that I do not have permission to download in them.
/www/
/www/file1 <-- No permission
/www/file2 <-- Permission
/www/ ..
/www/file999
/www/folder1/
/www/folder1/file28328
/www/folder1/file342423 <-- No permission
etc...
There's a few thousand files, so I don't really want to blacklist them. I'm downloading them using the following command:
using(var session = new Session())
{
session.Open(options);
session.GetFiles("/www", "C:/backup");
}
This then fails on file1, and does not continue. Is there a way (preferably an option) where I can just skip these files? I just want it to download everything it can.
|
WinSCP .NET assembly Skip failures
|
I use Duplicity for backing up my hosting account to remote server using WebDav. Schedule is daily incremental, monthly full.
I want to also protect backups against hosting hack, so I have to be sure that server (where is Duplicity) can not destrol backups on remote server.
That is not what duplicity is designed for. Its key feature is encryption to protect your backups on possibly insecure backends.
If your machine is hacked, your main problem is probably not backup destruction but silently backing up malicious code uploaded by the attacker.
Is there recommended solution for protecting backups?
Not to my knowledge. One option is a second repository that you rsync to, using --link-dest or dirvish, to achieve a snapshot-style backup of your backups. This way an attacker could modify/corrupt your old backups, but you'd still have the proper files. The issue then would still be to find out from which point in time your backups started to be soiled.
If not, I thought about making a script on the remote server that will make backups read-only after they are uploaded. (And eventually it would also delete backups older than x months.)
should work as long as the last duplicity run was successful. the only time duplicity overwrites something on the backend is when it resumes an interrupted backup.
I can make this script, but I am not sure which files can be protected safely. If I chmod o-w all files periodically, will backups continue the next day? Or does Duplicity need to write to already-uploaded files? How do I determine which files Duplicity will need to change and which not?
See my previous answer.
How can I delete old backups and not break something?
Use duplicity's purge commands. You could run it on your webdav machine as a user that still has write access to the repo.
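As a sketch, the retention side using duplicity's own purge commands could look like this (target URL and retention settings are examples):
duplicity remove-older-than 6M --force webdavs://user@host/backups
duplicity remove-all-inc-of-but-n-full 3 --force webdavs://user@host/backups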
Have fun.. ede/duply.net
|
I use Duplicity for backing up my hosting account to remote server using WebDav. Schedule is daily incremental, monthly full.
I want to also protect backups against hosting hack, so I have to be sure that server (where is Duplicity) can not destrol backups on remote server.
Is there recommended solution for protecting backups?
If not, I thought about making a script on the remote server that will make backups read-only after they are uploaded. (And eventually it would also delete backups older than x months.)
I can make this script, but I am not sure which files can be protected safely. If I chmod o-w all files periodically, will backups continue the next day? Or does Duplicity need to write to already-uploaded files? How do I determine which files Duplicity will need to change and which not?
How can I delete old backups and not break something?
|
How to protect Duplicity backups
|
PoSh is just a wrapper here. For learning what SMO classes and methods are available, you need to look at the SMO documentation. Start with SQL Server Management Objects (SMO) Programming Guide.
For a list of all classes in SMO, again I refer you to the product documentation; please look at the SQL Server Management Objects Reference (the list on the left-hand side has all the SMO namespaces; click on each namespace to see all the classes available in it).
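If you just want a quick list from the shell itself, reflection over the loaded assembly works too; a small sketch:
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$asm.GetTypes() | Where-Object { $_.IsPublic } | Select-Object -ExpandProperty FullName
That prints every public type (Server, Database, Table, ...) you can instantiate with New-Object.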
|
I am trying to learn PowerShell. Recently I got stuck on how to find out the objects in the assemblies.
Example:
I have loaded the PowerShell SQL assembly:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.sqlserver.smo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.sqlserver.smoextended") | Out-Null
For denoting the server, I need to add New-Object ("Microsoft.sqlserver.Management.SMO.Server")
For backup database New-Object ("microsoft.sqlserver.management.smo.backup")
My question is: how can I get the list of all the objects so I can use them in a script?
|
Powershell SMO objects
|
Yes, this is absolutely possible! Do the following from the server you're backing up:
tar czv <stuff to backup> | ssh [email protected] 'cat > /home/user/backupfolder/backup.tar.gz'
This instructs tar to output the archive to stdout, which is piped and sent over ssh to be saved to a remote file.
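Restoring works the same way in reverse, for example (paths are placeholders):
ssh [email protected] 'cat /home/user/backupfolder/backup.tar.gz' | tar xzvf - -C /restore/target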
|
I have a server that I would like to make a tar backup of, but the server itself doesn't have enough free disk space to hold an archive of the data it contains. Therefore, I would like to tar it directly to a destination reachable over SSH, so that the tar data is streamed to the remote target without using large amounts of temporary disk space on the source server.
The server is supposed to do an ssh connection and use a directory of the form:
ssh [email protected]:/home/user/backupfolder/
Is this possible with simple Linux shell piping, or is there an even simpler way?
|
Tar and save results directly to an SSH directory
|
My suggestion would be to:
Copy files to www
Use phpMyAdmin or a similar tool to create an empty database and import the dump Backup and Migrate created (a command-line alternative is sketched below).
Change /sites/default/settings.php to connect to your new database.
Log in to the back-end and check under Configuration -> Media -> File system that all the necessary paths are writable by Drupal. Just click "Save configuration"; if none of the fields gets a red border, it's fine. If some do, you have to change the directory permissions (or owner) to make those directories writable.
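If you prefer the command line over phpMyAdmin for the database import, it might look like this (database name, user and dump filename are placeholders):
mysql -u dbuser -p -e "CREATE DATABASE drupal_site CHARACTER SET utf8"
mysql -u dbuser -p drupal_site < backup_migrate_dump.mysql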
Comment from the asker (Oumaya): Thank you for your response. I used to do it manually as you describe, but I want a solution a non-technical admin user can follow. Anyway, I found the cause of the issue: memory_limit = 128M must also be set in wamp/bin/php/php.ini. I had only been editing the php.ini that the WAMP interface opens (php -> php.ini), which is located under wamp/bin/apache.
|
I'm trying to restore a site from a backup made by the Backup and Migrate module in Drupal. I followed these steps:
copy the drupal files to www directory
create a new mySQL database
install drupal using the new db
enable backup and migrate module
perform the restore
I faced this error :
Notice: Undefined index: files in theme_backup_migrate_file_list() (line 954 of .../sites/default/modules/backup_migrate/backup_migrate.module)
That issue was solved by using the dev version of the "Backup and Migrate" module.
However, after creating a backup with the dev version and restoring it with the same (dev) version on the host machine, nothing happens: nothing is changed, my site is still empty, and there is nothing in admin/reports/dblog.
Here are the settings that might be the cause of the issue:
DB settings: I have the same settings (db name, table prefix, ...)
php.ini memory_limit = 128M and my backup file is 126M
Appache module mod_rewrite is enabled
Any helpful hint is appreciated. I don't know what I am missing here.
|
Restore a backup site with Drupal
|
I did not change your monitoring script; I only changed the mail sending and used the PowerShell Copy-Item command for the copy:
$folder = 'c:\sites' # Enter the root path you want to monitor.
$filter = '*.*' # You can enter a wildcard filter here.
# In the following line, you can change 'IncludeSubdirectories to $true if required.
$fsw = New-Object IO.FileSystemWatcher $folder, $filter -Property @{IncludeSubdirectories = $false;NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite'}
Register-ObjectEvent $fsw Changed -SourceIdentifier FileChanged -Action {
$name = $Event.SourceEventArgs.Name
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
Write-Host "The file '$name' was $changeType at $timeStamp" -fore white
Out-File -FilePath c:\sites\filechange\outlog.txt -Append -InputObject "The file '$name' was $changeType at $timeStamp"
$username = "gmailaccount"
$password = "password"
$smtpServer = "smtp.gmail.com"
$msg = new-object Net.Mail.MailMessage
$smtp = New-Object Net.Mail.SmtpClient($SmtpServer, 587)
$smtp.EnableSsl = $true
$smtp.Credentials = New-Object System.Net.NetworkCredential( $username, $password )
$msg.From = "gmail"
$msg.To.Add("mail address that should receive the notification")
$msg.Body = "Please see archive for notification"
$msg.Subject = "backup information"
$files = Get-ChildItem "c:\sites\filechange\"
Foreach($file in $files)
{
Write-Host "Attaching file:" $file
$attachment = New-Object System.Net.Mail.Attachment -ArgumentList "c:\sites\filechange\$file"
$msg.Attachments.Add($attachment)
}
$smtp.Send($msg)
$attachment.Dispose();
$msg.Dispose();
Copy-Item c:\sites\$name C:\a\$name }
I checked that this script works for me: when a file's content changes, it first emails the log file, then copies the changed file to the destination c:\a\; the changed-file log is included as a mail attachment.
|
I've put together this script to detect file changes in a directory, so that whenever the changes take effect the file(s) changed will get backed up right away.
I have also set up an email notification.
The backup works. I can see that whenever a file changes it gets copied over to the desired destination. However, I am receiving three emails and the robocopy log shows no changes, which leads me to think the event is firing three times each time a file changes, so by the last run there are of course no changes left to copy.
Below you can see the code, hope you can help me figure out what's going on.
#The Script
$folder = 'C:\_Using Last Template Approach\' # Enter the root path you want to monitor.
$filter = '' # You can enter a wildcard filter here.
# In the following line, you can change 'IncludeSubdirectories to $false if required.
$fsw = New-Object IO.FileSystemWatcher $folder -Property @{IncludeSubdirectories = $true;NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite'}
Register-ObjectEvent $fsw Changed -SourceIdentifier AutoBackUp -Action {
$path = $Event.SourceEventArgs.FullPath
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
$datestamp = get-date -uformat "%Y-%m-%d@%H-%M-%S"
$Computer = get-content env:computername
$Body = "Documents Folders have been backed up"
robocopy "C:\_Using Last Template Approach" G:\BackUp\ /v /mir /xo /log:"c:\RobocopyLog.txt"
Send-MailMessage -To "[email protected]" -From "[email protected]" -Subject $Body -SmtpServer "smtp-mm.me.com" -Body " "
# To stop the monitoring, run the following command (e.g. using PowerShell ISE):
# Unregister-Event AutoBackUp
}
|
Powershell + Robocopy Auto backup executing multiple times
|
In the end, I found a working solution.
First, I used 2 separate expect scripts.
Telnet into the server, delete old backups, use mysqldump to extract all tables to a flat file via mysqldump -u db_owner -p --all-databases > output.sql, and create a massive tarball of everything. Logout.
Use SCP to pull the newly created tarball, extract it to a local SVN controlled working copy folder.
Use a second expect script to login to the server and delete the backup. Logout.
From there, I just manually svn add and svn commit as needed.
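The pull-and-commit half of that, sketched as plain shell commands (hostnames and paths are assumptions, and the tarball name should match whatever the server-side script produced):
scp [email protected]:~/site_backup.tar.gz /tmp/
tar xzf /tmp/site_backup.tar.gz -C ~/svn/site-backup/
svn add --force ~/svn/site-backup/
svn commit -m "Automated site backup" ~/svn/site-backup/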
|
Is there a simple way to do an automated backup of an entire website on a host like GoDaddy via the command-line?
So far, I know I need to back up all the files in my home directory recursively. I could possibly automate SFTP to connect and issue a get -R * command to get the full file dump, or just use SCP.
The other half of the puzzle is getting all of the tables available, mostly WordPress tables. My guess is that maybe there's a command-line command I could issue which dumps the database contents to a flat file, which I could then also pull via SFTP. If such a command exists, my plan is to use a combination of Telnet and EXPECT scripts to login to the GoDaddy site, issue some commands, then disconnect back to my local shell.
The end result should be that I have a folder with all of my server content in it, plus the flat file backup of the SQL database from the server. I know there are WordPress backup plugins, but they tend to provide a slew of ZIP files, when all I want is the raw data directly so I can put it in my private SVN server for backup and versioning.
So my question: how do I extract all of the databases on my GoDaddy server via the command-line to a file?
Thank you.
|
Full backup of GoDaddy site via command-line script
|
Here's some code that snapshots ALL EBS volumes, then only keeps the latest 2 snapshots. You could also modify it to only snapshot volumes with a particular tag. Substitute your own Region as appropriate.
#!/usr/bin/env python
import boto.ec2, os
MAX_SNAPSHOTS = 2 # Number of snapshots to keep
# Connect to EC2 in this region
connection = boto.ec2.connect_to_region('ap-southeast-2')
# Get a list of all volumes
volumes = connection.get_all_volumes()
# Create a snapshot of each volume
for v in volumes:
connection.create_snapshot(v.id)
# Too many snapshots?
snapshots = v.snapshots()
if len(snapshots) > MAX_SNAPSHOTS:
# Delete oldest snapshots, but keep MAX_SNAPSHOTS available
snap_sorted = sorted([(s.id, s.start_time) for s in snapshots], key=lambda k: k[1])
for s in snap_sorted[:-MAX_SNAPSHOTS]:
print "Deleting snapshot", s[0]
connection.delete_snapshot(s[0])
Just run it as a daily cron job.
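For example, a crontab entry along these lines (the script path and log location are assumptions):
# m h dom mon dow command
0 2 * * * /usr/bin/python /opt/scripts/ebs_snapshots.py >> /var/log/ebs_snapshots.log 2>&1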
|
A few of my critical EBS volumes are being backed up as snapshots periodically. Is there any way I can set up a deletion policy so that ONLY the two most recent snapshots are kept?
For example:
In one of the environments I have close to 300 snapshots from 10 EBS volumes. Once I have this policy in place, it should come down to 20 snapshots and stay at that level.
|
How to set AWS EBS Volume Snapshot deletion Policy?
|
Try asking for free migration assistance here: https://sp.parallels.com/products/plesk/how-to-migrate/
|
I was using shared hosting with cPanel. Now I have bought a dedicated server, and Plesk Web Pro edition is included with it for free. I want to transfer my cPanel email accounts to the Plesk panel. How can I do it?
Note: I don't have root access or an SSH account on the shared hosting.
According to this topic, I may have root access to the cPanel of the old server.
|
How to transfer email accounts and email messages from cPanel to Plesk 12?
|
You might be better off using a third-party application that offers features like full system image backup or backup of a specific folder or drive. I would recommend Acronis True Image.
|
I am using the image backup feature in Windows 8.1 to create a backup of my system. However, if I try to create a new image, the first one is overwritten. So I create an image, rename it, and then create a second one. But when I want to restore an image, only one image is available: the one that was not renamed.
So my question is: can I have multiple image backups on Windows 8.1?
Thanks so much.
|
Can I have multiple image backup?
|
Use a version control system such as TFS, Subversion, PlasticSCM, or git. Seriously. Distributed VCSs like git or Mercurial will let you transport the whole repository easily.
If you insist on a pack&go approach, the ZIP tool of your choice will, most likely, support include / exclude rules based on file name patterns. For example, in Total Commander it's easy to exclude bin and obj folders.
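A minimal git-based sketch of that approach (folder and bundle names are arbitrary):
cd MySolution
git init
printf "bin/\nobj/\npackages/\n" > .gitignore   # keep build output and NuGet packages out of the repo
git add .
git commit -m "Initial source-only snapshot"
git bundle create ../MySolution.bundle --all    # a single file you can copy to the new machine and clone from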
|
I would like to make a backup copy of my Visual Studio 2013 MVC application containing only the source code, such that I could open the solution on a new machine and have it compile after NuGet has downloaded the necessary packages and so on.
I realise that if the project were in TFS or similar I could go to the new machine and download it that way; however, I am looking for a file-copy solution.
While I could ZIP up the entire folder including binaries, that seems like a sledgehammer approach. Having looked around, there does not appear to be an easy way to do this. Has anyone got a solution or a utility I may have missed?
|
Backing up source code for a C# solution
|
If you want to use a file-based backup solution to back up MySQL databases, it is best to create a dump of the databases and back up the dump. You can create the dump with mysqldump -u root -p --all-databases > dump.sql
You might also back up your /etc/my.cnf; having the configuration makes restoring easier.
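A minimal sketch of wiring that into a nightly job NetWorker can then pick up, plus the matching restore (paths are assumptions; credentials are assumed to come from ~/.my.cnf):
# crontab entry: dump all databases every night at 01:00
0 1 * * * mysqldump --all-databases > /var/backups/mysql/dump.sql
# restore later with:
mysql -u root -p < /var/backups/mysql/dump.sql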
|
On a CentOS machine we have mediaWiki + bugZilla installed for internal uses.
I'd like to use the EMC NetWorker installation in our network to back up the databases.
Is it enough to back up the /var/lib/mysql/ directory?
And if yes, do I need to back up the whole directory (ibdata1, mysql, mysql.sock, ...) or only the mediawiki and bugzilla databases?
I saw in the post "Backup Mysql Databases" that: "For innodb, you'll need to backup using mysqldump".
Thanks
Sam
|
backup mysql with emc networker
|
$oky and $out are local variables, so they are not set outside the function; and $sdir, $name, $root and $skip are not defined within the function.
Method 1 - Parameters:
function backup($sdir,$name,$root,$skip) {
exec("tar -cvf $sdir/$name $root/* --exclude='$sdir/$skip' ", $out, $oky);
return array("oky"=>$oky, "out"=>$out);
}
$result = backup($sdir, $name, $root, $skip);
if (!$result["oky"]) {
echo $result["out"].": Backup Completed!";
} else {
echo $result["out"].": Backup Not Completed!";
}
Method 2 - Don't use a function:
exec("tar -cvf $sdir/$name $root/* --exclude='$sdir/$salt' ", $out, $oky);
if (!$oky) {
echo "$out: Backup Completed!";
} else {
echo "$out: Backup Not Completed!";
}
Method 3 - Global variables:
function backup() {
global $sdir,$name,$root,$skip,$oky,$out;
exec("tar -cvf $sdir/$name $root/* --exclude='$sdir/$skip' ", $out, $oky);
}
backup();
if (!$oky) {
echo "$out: Backup Completed!";
} else {
echo "$out: Backup Not Completed!";
}
|
I'd like to make a backup of my website using the tar command and exec() in PHP. I wrote a small script to do that, but nothing happens... where am I going wrong? I have PHP 5.6.5 and Linux hosting with exec enabled and the tar command available. Here is a PHP example of what I'd like to do.
<?php
$root = $_SERVER['DOCUMENT_ROOT'];
# root is /web/htdocs/www.example.com/home/
$name = "backup_" . date("[d-m-Y][H-i]") . ".tar.gz";
# name is backup_[25-02-2015][18-57].tar.gz
$skip = "*.gz";
# skip is the file I want to exclude (example: skip backup_[25-02-2015][18-57].tar.gz)
if ((substr($_SERVER['DOCUMENT_ROOT'],-1,1) == "/") && (substr($_SERVER['PHP_SELF'],0,1) =="/")) {
$sdir = $_SERVER['DOCUMENT_ROOT'] . substr(dirname($_SERVER['PHP_SELF']),1);
} else {
$sdir = $_SERVER['DOCUMENT_ROOT'] . dirname($_SERVER['PHP_SELF']);
}
# sdir is /web/htdocs/www.example.com/home/bak/ and is the path where the script lives
# out is the output
# oky is the success o failed exec command
function backup() {
exec("tar -cvf $sdir/$name $root/* --exclude='$sdir/$skip' ", $out, $oky);
}
backup();
if (!$oky) {
echo "$out: Backup Completed!";
} else {
echo "$out: Backup Not Completed!";
}
?>
Any help is appreciated!
|
Exec php tar command to backup website dynamically
|