Response | Instruction | Prompt |
---|---|---|
In addition to what knightpfhor wrote above, you might also want to take a look at Getting Started with AzCopy - it is a Windows tool that helps you copy content between your local storage and Blob Storage. PowerShell and XPlatCLI are other options as well.
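For example, a minimal AzCopy invocation could look like the following - a sketch assuming the classic Windows AzCopy syntax; the account, container, key and local path are all placeholders:
AzCopy /Source:C:\backup\photos /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:<storage-account-key> /S
The /S switch makes the copy recursive, so the folder structure under the source path is preserved as part of the blob names.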
|
I have a trivial and simple task - I want to store some of my data (documents, photos etc.) on Azure as a backup. Which type of service should I select? Store it as Blobs? But I want to preserve the structure of the data (folders, subfolders etc.). Azure Backup? It stores only archived data, and I don't want to archive it all into one file. DocumentDB? I don't need features like returning JSON etc. What is the best way to store many (thousands of) files (big and small) while preserving the folder structure, without archiving them into one file (so that I have a simple way to get a single file quickly)?
I use Windows 7.
|
Microsoft Azure - how to store backup data
|
Your virtual machine is saved in the form of multiple files, which you can easily back up to an external hard drive or to the cloud. If you are using VMware, then your machine will be split into .vmdk, .vmx, .vmxf, .vmsd and .nvram files, depending on your VM configuration.
Just check where you store the VM files and back them up before re-installing the host system. Afterwards, just import the .vmx file back into VMware.
In VMware Player, right-click on your VM, go to Settings, then Options, and under Working Directory you should see where your VM files are stored. Just back up that entire folder before reinstalling.
|
I have installed CentOS in a VMware virtual machine on Windows 7. Now I want to reinstall my Windows 7, but I do not want to lose my CentOS virtual machine. I have Googled this topic many times but did not find any helpful information.
Any help?
Thanks
|
How to backup and restore Virtual machine OS?
|
Duplicator would work great for this - I often use it when moving from one local environment to another. You would simply need to run the plugin, save the files to a flashdrive/cloud, then upload those files to your new machine.
It's also fairly simple to do manually. Simply export your database from phpMyAdmin and upload it into phpMyAdmin on your new machine.
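If you prefer the command line, a rough sketch of the manual route looks like this (the database name and credentials are placeholders, and you would also copy the WordPress files, e.g. the wp-content folder, across yourself):
# On the old machine: dump the WordPress database
mysqldump -u root -p wordpress_db > wordpress_db.sql
# On the new machine: create the database and import the dump
mysql -u root -p -e "CREATE DATABASE wordpress_db"
mysql -u root -p wordpress_db < wordpress_db.sql
If the site URL changes between machines, you may also need to update the siteurl and home values in the wp_options table.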
|
I made a website (70% done) in WordPress using XAMPP on my home PC, but now I need to move the existing work from my PC to my friend's PC to do the remaining 30% of the work.
There are some plugins I found, like Duplicator, WPBackup etc., but through them it is only possible to transfer the site to a server through FTP or some other way.
So I was wondering if there is any plugin or some other way to solve this issue (like transferring a zipped folder to the other machine).
Thanks in advance!!
|
Is it possible to move a WordPress site from localhost of one machine to localhost of another machine?
|
rsync set up through cron is the canonical way of doing this.
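A minimal sketch, assuming SSH access between the two servers (the paths, user and host are placeholders):
# Example crontab entry: mirror /var/www to the remote backup host every night at 02:00
0 2 * * * rsync -az --delete /var/www/ backupuser@backuphost:/backups/www/
rsync only transfers changed files, so after the first run the nightly job is usually quick; drop --delete if you don't want deletions propagated to the backup.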
|
Is it possible to create a backup by saving some files (copy and replace) from one server to another, not necessarily with the same provider?
Thanks !
|
Backup files from a server to another
|
NVM. I just created a java program and followed the instructions on http://db.apache.org/derby/docs/10.8/adminguide/cadminhubbkup01.html
|
I want to write a script, preferably in Groovy, which when called will back up my database using the online backup procedure. I have found examples of how to do this in Java but was confused about how to do it from a Groovy script. Any help?
Thanks
|
Backup Derby DB Groovy Script
|
Looks like forum posts are ignored when restoring a course backup from 1.9 to 2.x - https://docs.moodle.org/28/en/Course_backup#Backup_and_restore_from_1.9_to_2
An upgrade will probably work though... Install Moodle 1.9, then restore the course so it's in the database. Check the forum posts are there. Then upgrade first to 2.2, then to 2.8 - https://docs.moodle.org/28/en/Upgrading_FAQ
|
Last year I taught an online course on my university's Moodle website (back then running Moodle 1.9). There were a lot of forum posts in that course that I wanted to save for further analysis, so I asked my university's technical staff for a backup (.zip) of that course. So far, so good. I can import the zip file on my own computer, where I have installed the latest version of Moodle (2.8.2), and I can see all my material BUT the forum posts. I think this is because my personal Moodle website doesn't have any of the original users (names, ids, etc.), but I'm not sure.
Moreover, I can see that the original forum posts (the text that forms those posts) are in the moodle.xml file (inside the backup).
Is there any way to import/restore those posts? Should I manually create new users that match the IDs of the original ones to do so?
|
How can I restore forum posts from a Moodle backup into another server installed from scratch?
|
There is no --link option for docker exec. If you want to back up using a special script:
Create a new image db_backup starting from the postgresql one (the one that the db container uses), adding the backup script to some folder.
Do docker run --volumes-from db db_backup your_backup_script.sh.
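A hypothetical sketch of those two steps (the image name, script name and container name are placeholders):
# Dockerfile for db_backup: FROM the same postgres image the db container uses,
# plus a COPY of your_backup_script.sh into the image.
docker build -t db_backup .
# Run the script with the db container's volumes attached
docker run --rm --volumes-from db db_backup your_backup_script.sh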
|
I want to execute a command that uses commands from multiple containers.
E.g., I want to execute a backup script that uses the psql and pg_dump commands.
docker exec db_backup pg_dump
failed to exec: exec: "pg_dump": executable file not found in $PATH
docker run has an option --link. Is there a similar option for exec?
To clear this up, there are 3 containers:
my_app
db
db_backup
I want to use pg commands located in db from my db_backup scripts.
|
Docker: Run commands from multiple containers
|
Looking at the difference between your two commands, you are running the script in two different ways: the first is run from within the directory, the second from the full path. My suggested next options would be:
Run the script with the full path from your command line and see if any errors are generated and then resolve those errors.
OR
Change the cron job to read like the following */1 * * * * cd /home/vijay/backups; bash pg_backup.sh.
Also, are you sure this script needs to be run every minute?
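Either way, a crontab entry along these lines (the log file path is just an assumption) also captures the script's output, so you can see why it fails under cron:
*/1 * * * * cd /home/vijay/backups && bash pg_backup.sh >> /home/vijay/backups/pg_backup_cron.log 2>&1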
|
I used the common PostgreSQL backup script from Automated_Backup_on_Linux.
It runs in the terminal:
vijay@HCL:~/backups$ bash pg_backup.sh
But it does not run from the crontab on Ubuntu 12.04:
*/1 * * * * /home/vijay/backups/pg_backup.sh
It does not even log an error in /var/log/syslog.
|
Postgresql Auto backup script runs in terminal but does not run in CRON job
|
My vote's on VSS. The main reason is that it doesn't interfere with other processes modifying your files, thus it provides consistency. A possible inconsistency pretty much defeats the purpose of a backup. The API is stable and I wouldn't worry about its future.
|
I've been thinking about writing a small specialized backup app, similar to newly introduced file history in Windows 8. The basic idea is to scan some directories every N hours for changed files and copy them to another volume. The problem is, some other apps may request access to these files while they are being backed up and get an access denial, potentially causing all kinds of nasty problems.
As far as I can tell, there are several approaches to that problem:
1) Using Volume Shadow Copy service
From my point of view, the future of this thing is uncertain and its overhead during heavy IO loads may cripple the system.
2) Using Sharing Mode when opening files
Something like this mostly works...
using (var stream = new FileStream("test.txt", FileMode.Open, FileAccess.Read,
FileShare.Delete | FileShare.ReadWrite | FileShare.Read | FileShare.Write))
{
[Copy data]
}
... until some other process requests access to the same file without FileShare.Read, at which point an IOException will be thrown.
3) Using an Opportunistic Lock that may be "broken" by other (write?) requests.
This behaviour of FileIO.ReadTextAsync looks exactly like what I want, but it also looks very implementation-specific and may be changed in the future. Does someone know how to explicitly oplock a file locally via C# or C++?
Maybe there is some simple C# method like File.TryReadBytes that provides such "polite" reading? I'm interested in solutions that will work on Windows 7 and above.
|
Reading a file without causing access denial to other processes
|
Not really.
The easiest way to handle it is to tell the users to never shut it down. Admittedly that's not terribly useful depending on your environment.
You can schedule a job to run on startup or on logon. That would theoretically catch your 9 AM window.
The hard one would be the 9 PM backup. You may be able to use the "Wake the computer to run this task" option on the Conditions tab, depending on your power settings. You could, theoretically, use GPEdit.msc to set a shutdown script that runs the backup when the system is shut down. But that has its own special set of issues depending on whether the backup location is local or remote. Additionally, this method has the potential of users thinking the shutdown is taking too long and killing the power.
|
I want to run a backup twice a day, but I'm unable to make assumptions on when the computer is turned on. As an example: If I create a task in the task scheduler to run a backup script at 9 AM and 9 PM, I have no guarantee that the backup will run, since the computer might be powered on at 10 AM and shutdown at 8 PM.
Is there an easy way to specify a time window in which a backup should take place within the windows task scheduler?
|
Windows Task Scheduler - specify time window, but only fire once
|
The script will execute the poweroff command.
If you want that to fail, you can temporarily change the /sbin/poweroff file permissions so it is not executable.
First, become root to gain full access to protected system files:
sudo su
Now remove the "is executable" permission from the poweroff binary:
chmod -x /sbin/poweroff
After the script is finished remember to revert this change, as otherwise you'll be left with a non-working poweroff command:
chmod +x /sbin/poweroff
Tested, works well.
|
I have a backup script that executes the poweroff command to shut down my machine once all backups are finished. However, it sometimes happens that I want to deny the poweroff, but without killing the script process, as that would corrupt my backup and leave a bad mess.
How can I make the poweroff not execute without pausing or killing the script?
Here's my script:
#!/bin/bash
/unfa/Scripts/Daily\ backup/backup.sh > ~/backup-unfa.log; echo "data thread finished" &
/mnt/system-backup/system-backup.sh > ~/backup-system.log; echo "system thread finished" &
sleep 1m
while [[ -n $(pgrep rdiff-backup) ]]
do echo "Backing up yo data, dude..."; sleep 3s
done
echo "POWER OFF"
poweroff
|
How to deny a poweroff without killing a running script that will execute it?
|
I just used duplicati-b and the files moved over. :) It seems the prefix works as it is. No wildcard required.
|
I am using Duplicati to store backups of important documents on Glacier; however, the problem is with the Lifecycle Rule Prefix. The Duplicati guide says to use the prefix duplicati-b* to move dblock files to Glacier. Basically it asks to move all files beginning with duplicati-b, but it's been two days and the rule is not working :(
Is the wildcard '*' all right? Is there any guide for all prefix types? I'm only finding simple prefixes that are meant for subfolders. Any help?
https://i.stack.imgur.com/A5ncv.png
https://i.stack.imgur.com/dQnQf.png
|
Amazon S3 Lifecycle rules Prefix to move files to Glacier with certain naming convention
|
If both your alf_data and postgres directories are on the EBS volume, then a snapshot is sufficient.
You just need to know that a hot backup (done while Alfresco is running) could be inconsistent: an out-of-sync database and alf_data, or incomplete transactions.
A cold backup is best; take a look at the Alfresco Wiki for more info.
Still, doing a hot backup at night when there are no jobs running (ldap/cleanup/etc.) is doable.
|
I have an Alfresco Community installation, hosted on Amazon Web Services, which I am using as a personal repository. I am starting to have quite important docs stored within it (roughly 2 GB), so I am thinking about how to implement a strong backup/restore strategy.
I have seen many tutorials and official docs, showing how to backup alfresco by backing up two directories, alf_data and the postgresql (or whatever database is used) directory.
The question: in the case of a default Alfresco installation, which means with an embedded database, I wonder if the following scenario is enough for being considered a good cold back up strategy. The starting point is of course stopping Alfresco, then one (or both) of the following.
Tar gz the whole alfresco installation directory and store in a safe place (at the moment Amazon S3).
Create an EBS snapshot with the amazon EC2 console
|
Back up the whole Alfresco installation directory, hosted on Amazon EC2 instance
|
The file cannot be saved because you are attempting to build the file name with the date formatted as "yyyy/mm/dd". Windows will not allow you to save a file name with slashes in it. Try changing the Format function to Format(Date, "yyyy-mm-dd").
|
I have a backup macro that runs every time when I save my excel file and saves a copy of the workbook into a folder.
Now I got a new computer where I use the same file, and it does not work anymore, I get run-time error 1004.
My co worker uses the same excel file and the same computer with another user and for him the macro works perfectly as it used to work for me on the other computer.
Code:
'backup
ora = ".h" & Hour(Now)
bufolder = ThisWorkbook.Path & "\excel_backups"
If Len(Dir(bufolder, vbDirectory)) = 0 Then
MkDir bufolder
End If
excfile = ThisWorkbook.Path & "\excel_backups\backup_" & Format(Date, "yyyy/mm/dd") & ora & "_" & ActiveWorkbook.name
If Dir(excfile) = "" Then
ActiveWorkbook.SaveCopyAs Filename:=bufolder & "\backup_" & Format(Date, "yyyy/mm/dd") & ora & "_" & ActiveWorkbook.name
End If
Edit: I get the error on line:
ActiveWorkbook.SaveCopyAs Filename:=bufolder & "\backup_" & Format(Date, "yyyy/mm/dd") & ora & "_" & ActiveWorkbook.name
It says:
Microsoft Office Excel cannot access the file '...'
There are several
possible reasons:
The file name or path does not exist. The file is being used by another program. The workbook you are trying to save has the same name as a currently open workbook.
I don't think any of these causes the problem.
Thank you for your time
|
run-time error 1004 on backup macro that worked before and works on other users
|
The best way to back up a MySQL Cluster is to use the native backup mechanism that gets initiated with the START BACKUP command in the ndb_mgm management client.
Backup is easy (just a single command) and relatively quick. Restore is a bit more tricky, but is at least faster and more reliable than using mysqldump. See also:
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-backup.html
and
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-programs-ndb-restore.html
2) The backups are consistent snapshots and are distinguishable by an auto-incrementing backup ID, so keeping several snapshots is easily possible.
3) The backup is clustered by default (every data node stores its backup files on its own file system), but you should either have the backup directory pointing to a shared file system mount, or copy the files from all nodes to a central place once a backup has finished.
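As a rough sketch of that flow (the node ID, backup ID and path are placeholders):
# Start a cluster-wide backup from the management client
ndb_mgm -e "START BACKUP"
# Restore from the generated BACKUP-<id> directory
ndb_restore -n 3 -b 1 -m -r --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
Here -m restores the metadata and -r the data; typically you run the -m step against one node only and then -r for each node's backup files.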
|
I have a mysql cluster database spread on 2 servers.
I want to create a backup system for this database based on the following requirements:
1. Recovery/restore should be very easy and quick. Even better if I can switch the connection string at any time I like.
2. The backups must be like snapshots, so I want to keep copies from different days (and maybe keep the latest 7 days, for example).
3. The backup copy of the database does not have to be clustered.
|
Best way of backing up mysql clustered database
|
The solution to this problem: you need to find the adb executable in the Android SDK folder. It is inside the platform-tools folder. Then use a terminal to go to the folder that contains adb (so that adb appears when you run ls) and run the command with ./ in front.
For example:
./adb shell bmgr run
instead of just adb shell bmgr run.
First check whether your device is connected using:
./adb devices
From here you can force backup and restore for your app.
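As a hedged example, the usual bmgr sequence looks something like this (the package name is a placeholder):
./adb shell bmgr enable true                  # make sure the Backup Manager is enabled
./adb shell bmgr backup com.example.yourapp   # queue your app for backup
./adb shell bmgr run                          # force the queued backup pass to run now
./adb shell bmgr restore com.example.yourapp  # trigger a restore to verify it worked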
|
So I have implemented all the backup methods following the official Android backup tutorial: http://developer.android.com/guide/topics/data/backup.html
But after I am done with everything, I cannot test it! The Android website says to use bmgr with commands such as "adb shell bmgr run" etc., but I have no idea what this means...
I am using Eclipse to develop Android apps, and I am using real Samsung Galaxy Devices to test my apps. I am also backing up 1 file from the internal storage.
So how do I force the back up? Where can I write the command line (i can open DDMS)? What and where is this bmgr thing?
And does anyone know how long it takes for devices to actually backup data? Backing up data doesn't seem to happen immediately (if it happens at all) after you call:
BackupManager bm = new BackupManager(this);
bm.dataChanged();
TY
|
How to TEST Backup in ANDROID (perhaps using bmgr)
|
After more research and testing I can say it's not possible...
|
I'm having a problem with my app when the user restores it onto another iPhone. I made a fix and I need to test it, so a short question:
Is it possible to back up an app installed from Xcode in iTunes and restore it on another iPhone?
Thanks!
|
Backup app from Xcode
|
The Secondary NameNode (SNN) was the first of numerous attempts to reduce NN load and, to a certain extent, provide HA.
Since then there have been upgrades to the SNN, like the Checkpoint Node and the Backup Node.
SNN: copies and merges the FSImage and edits.log periodically, for faster NN startup times.
Checkpoint Node: copies and merges the FSImage and edits.log. It then sends this updated version to the NN to replace the older FSImage.
Backup Node: this, however, maintains a backup of all the alterations at runtime without any delay. To achieve this, the edit streams are shared with both the NN and the Backup Node, which merges them and periodically sends the result to the NN to update the NN's FSImage file - hence providing the functionality that you ask for.
As for the disadvantages of copying per-second updates from the NN: it would create a network traffic bottleneck in a heavily loaded cluster.
Go through the below link to read more: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Secondary_NameNode
|
Everyone knows that the NameNode stores metadata and that every change is recorded in the edit log files every fraction of a second. Log files are the key factor in identifying problems. Now to the point: by default the Secondary NameNode takes a backup of the metadata from the NameNode periodically. The namespace image and edit log files are backed up every hour (configurable).
Why does the Secondary NameNode wait an hour - why isn't it taking a backup every second, given that every change is already written to the log files every fraction of a second? If it were configured like that, would there be any disadvantage? Please explain in depth.
|
Why does the Hadoop Secondary Namenode take a backup only every hour?
|
You can do that by telling the Windows Task Scheduler to run the msdeploy command line on a recurring schedule.
Create a batch file with the following content:
set currentDate=%DATE:~10,4%-%DATE:~4,2%-%DATE:~7,2%
"c:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^
-verb:sync ^
-source:webServer ^
-dest:archivedir=c:\iisBackup\%currentDate%
Running this batch should back up your complete IIS instance into c:\iisBackup.
Schedule the batch. Run:
schtasks -create -sc DAILY -tn BackupIis -tr PathToYourBatchFile.bat
|
I have an IIS 7 server which hosts 19 applications internally.
As part of the daily backup process, is there any way to automate the Export Server Package feature so that it runs daily at a specified time and dumps the package zip file on a network drive?
Let me know if this is possible.
Thanks in advance.
|
Automate Export Server Package daily
|
You have a few options... For small amounts of data, you can use COPY to back up / restore from CSV:
http://www.datastax.com/documentation/cql/3.0/cql/cql_reference/copy_r.html
For larger stuff, you've got the right link. You essentially take a snapshot (which puts it in the folder you mention), and then use something like tar to archive the files and output them to a different directory. This is what we're doing in production... we clear the previous snapshot, take a snapshot and tar the folder to a network backup.
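A minimal sketch of that flow, assuming default data paths and placeholder keyspace/snapshot names:
nodetool clearsnapshot mykeyspace          # drop the previous snapshot
nodetool snapshot -t nightly mykeyspace    # take a new snapshot tagged "nightly"
tar -czf /mnt/network_backup/mykeyspace_$(date +%F).tar.gz \
    /var/lib/cassandra/data/mykeyspace/*/snapshots/nightly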
|
I am learning Cassandra these days. I have successfully backed up and restored tables and keyspaces as mentioned in this URL.
But I am looking for the following options:
1) Take a complete backup of a keyspace at a location other than the directory mentioned in cassandra.yaml. The -t option creates a directory in the snapshots folder, not at a different HDD location.
2) Or a backup/restore procedure like MySQL's.
Thanks
|
cassandra backup like mysql
|
I'd recommend you take a look at the SQL Server backup to Microsoft Azure tool: http://www.microsoft.com/en-au/download/details.aspx?id=40740.
You should look at using Azure Backup for this scenario: http://azure.microsoft.com/en-us/services/backup/. Effectively you install a backup agent locally and it will manage backups based on configuration (i.e. doing just delta backups).
|
I have my own VPS running Windows Server 2012 and MSSQL 2012 Express.
2 requirements I have:
I want to back up all databases to Azure
I want to back up images that were uploaded to the websites running on this VPS. Now, instead of just copying ALL images (~15 GB and growing) every day, I want to back up only new or changed images.
Are these 2 scenarios possible and if so, how?
|
Backing up databases and images from my VPS to Azure
|
I created a smaller backup by going to the Options tab of the backup dialog and setting the compression option to 'Compress backup'. I also chose 'Overwrite all existing backup sets' and backed up to a new media set name.
|
I am backing up a SQL Server 2008 R2 database with the intention of restoring it elsewhere. When I back it up using the backup wizard (right click, Tasks, Back Up...), the size of the resulting backup is much larger than I know the database should be (c. 45 GB when there is only about 5 GB of data).
How can I get a reasonably sized backup created to transfer and restore elsewhere?
Thanks,
Dan
|
Backup of SQL Server 2008R2 database is huge
|
With the basic command:
mysqldump -u myuser -pmypassword --databases wpzb watt > dump_file.sql
You should get all the data and schema in one file
It might be best to execute the command for each database separately:
mysqldump -u myuser -pmypassword --default-character-set=utf8 --single-transaction=TRUE wpzb > wpzb_dump_file.sql
This way you have the schema and data for one database in one file!
Since the database is local and on the default port you can just use:
-u myuser -pmypassword
And leave the localhost and port attributes off.
I tested with no-data=true in the my.ini file on Windows, and running mysqldump then dumps no data - so check whether such an option is set in your configuration:
my.ini
[client]
no-data=true
|
I have this command, which is only creating statements to re-create the database and schemas, but it's not backing up any of the data:
"C:\Program Files (x86)\MySQL\MySQL Workbench CE 6.0.6\mysqldump.exe" --user=myuser --password=mypassword --host=localhost --port=3306 --result-file="Z:\mysql-backup\backup.%date:~10,4%%date:~7,2%%date:~4,2%.sql" --default-character-set=utf8 --single-transaction=TRUE --databases "wpzb" "wptt"
What am I missing?
|
MySQL mysqldump.exe is only backing up schemas
|
The solution is to add
. /home/db2inst1/sqllib/db2profile
before the db2 operations
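For example, one way to apply that - a sketch keeping the script's own variables - is to source the profile inside the su -c call, so the db2 CLI runs with the instance environment:
su -c ". /home/db2inst1/sqllib/db2profile; ${DB2Home}/db2 backup database ${Primary_Server_DBName} to ${BackupFolder} compress without prompting" ${InstanceName}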
|
I am trying to run a backup script to back up the database Sample:
su -c "${DB2Home}/db2 quiesce database immediate force connections" ${InstanceName}
echo "[INFO:`date`]Executing the backup command: ${DB2Home}/db2 backup database ${Primary_Server_DBName} to ${BackupFolder} compress without prompting "
su -c "${DB2Home}/db2 backup database ${Primary_Server_DBName} to ${BackupFolder} compress without prompting" ${InstanceName}
I executed the script as the root user, not as the db2inst1 user.
I am getting the following error:
[INFO:Tue Oct 21 18:44:23 IST 2014]Executing the backup command: /opt/ibm/db2/V10.5/bin/db2 backup database TIPDB to /home/db2inst1/backupFolder compress without prompting
DB21019E An error occurred while accessing the directory
"/root".
|
Db2 DB21019E backup script error
|
I'm not sure if I understand the problem right. (Why this backup folder is needed at all.) I think 'backup folder' means a folder in the solution explorer. So it may be that you have to switch the 'Build action' of the files in the backup folder to 'None'. (Right click on the files->Properties then see Property Window) (I hope those are the right terms in English; I have a German version of VS)
|
Ok, so I am converting code to C#, and I am using Visual Studio 2013. I have a backup folder with copies of various files from the project in it, because I cannot build the project unless they are in there. If the files are not in there, an error appears saying "source file 'FolderPath\Backup\FileName' could not be found". So to solve that issue I put a copy of the file into that folder. However, when I do that I get the error 'Ambiguity between 'variable' and 'variable'', because it seems to be reading the variable from both the backup copy and the original copy. What am I doing wrong? Is there a way to get the compiler to not read the backup copies?
|
Backup Folder and Ambiguity of Definition of Variables
|
Generally, using find like that is a bad idea - you are basically relying on separating filenames on whitespace, when in fact all forms of whitespace are valid in filenames on most UNIX systems. find itself has the ability to run a command on each file found, which is generally a better thing to use. I would suggest doing something like this (I'd use a couple of scripts for this for simplicity; I'm not sure how easy it would be to do it all in one):
main.sh:
BASEDIR="$1" #I tend to quote all variables - good habit to avoid problems with spaces, etc.
DESTDIR="$2"
find "$BASEDIR" -type d -exec ./handle_file.sh \{\} "$BASEDIR" "$DESTDIR" \; # \{\} is replaced with the filename, \; tells find the command is over
find "$BASEDIR" -type f -exec ./handle_file.sh \{\} "$BASEDIR" "$DESTDIR" \;
handle_file.sh:
FILENAME="$1"
BASEDIR="$2"
DESTDIR="$3"
RELPATH="${FILENAME#"$BASEDIR"}" # bash string substitution double quoting, to stop BASEDIR being interpreted as a pattern
DESTPATH="${DESTDIR}/$RELPATH"
if [ -f "$FILENAME" ]; then
echo ln \""$FILENAME"\" \""$DESTPATH"\"
elif [ -d "$FILENAME" ]; then
echo mkdir -p \""$DESTPATH"\"
fi
I've tested this with a simple tree with spaces, asterisks, apostrophes and even a carriage return in filenames and it seems to work.
Obviously remove the escaped quotes and the "echo" (but leave the real quotes) to make it work for real.
|
I need to create a clone of a directory tree so I can clean up duplicate files.
I don't need copies of the files, I just need the files, so I want to create a matching tree with hard links.
I threw this together in a couple of minutes when I realized my backup was going to take hours
It just echos the commands which I redirect to a file to examine before I run it.
Of course the usual problems, like files and directories containing quotes or commas, have not been addressed (bash scripting sucks for this, doesn't it - this, and files containing leading dashes).
Isn't there some utility that already does this in a robust fashion?
BASEDIR=$1
DESTDIR=$2
for DIR in `find "$BASEDIR" -type d`
do
RELPATH=`echo $DIR | sed "s,$BASEDIR,,"`
DESTPATH=${DESTDIR}/$RELPATH
echo mkdir -p \"$DESTPATH\"
done
for FILE in `find "$BASEDIR" -type f`
do
RELPATH=`echo $FILE | sed "s,$BASEDIR,,"`
DESTPATH=${DESTDIR}/$RELPATH
echo ln \"$FILE\" \"$DESTPATH\"
done
|
Is there a utility for creating hard link backup?
|
See the --apply-log option in the innobackupex option reference: http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_option_reference.html
--apply-log: Prepare a backup in BACKUP-DIR by applying the transaction log file named xtrabackup_logfile located in the same directory. Also, create new transaction logs. The InnoDB configuration is read from the file backup-my.cnf created by innobackupex when the backup was made.
innobackupex --apply-log uses the InnoDB configuration from backup-my.cnf by default, or from --defaults-file, if specified. InnoDB configuration in this context means server variables that affect data format, i.e. innodb_page_size, innodb_log_block_size, etc. Location-related variables, like innodb_log_group_home_dir or innodb_data_file_path, are always ignored by --apply-log, so preparing a backup always works with data files from the backup directory, rather than any external ones.
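In other words, a typical run is a two-step sketch like this (the paths and the timestamped directory name are placeholders):
innobackupex /data/backups/                                   # step 1: create the raw backup
innobackupex --apply-log /data/backups/2014-10-01_02-00-00/   # step 2: prepare it so the data files are consistent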
|
I'm trying to understand XtraBackup and I can't really get the difference between creating a backup and preparing it. Preparing for what? I've created a backup - what is the preparation for? Does it need to be prepared at all? E.g. here
|
Percona XtraBackup "create" vs "prepare" backup
|
Well, to start, a large portion of your regex replaces probably aren't working; you need to escape most of those characters, for example "\". Anyway, you can shorten the whole replace to one expression like this:
-replace '[*"#¤&()=?´`|@£${\[\]}^~¨*<>\\_;.!]','Æ'
#query to show it working
'*"#¤&()=?´`|@£${[]}^~¨*<>\_;.!' -replace '[*"#¤&()=?´`|@£${\[\]}^~¨*<>\\_;.!]','Æ'
Expanding on that, here is how you would get it to only back up when the file is actually modified:
(Get-ChildItem "C:\Users\Administrator\Desktop\Eurocard\SEB\*.*" -recurse).FullName |
Foreach-Object {
$Content = (Get-Content $_ -Raw)
$Regex = '[*"#¤&()=?´`|@£${\[\]}^~¨*<>\\_;.!]'
If ($Content | Select-String $Regex -Quiet)
{
$Content -Replace $Regex,'Æ'
<#
rest of code block such as copies, backups, renames whatever would go here.
This way it is only taking place if the file has an unwanted character and is
modified
#>
}
}
|
Right now I am looking to improve my code to use less space and be more intelligent. I need the code below to only back up files IF they are modified by the find & replace; right now I'm doing a backup of everything and overwriting old backups.
The next thing I would like is to NOT overwrite the backups, but instead give them a number, so if there are 2 of the same backup in the "backup" folder it would look like this:
Filebackup.DCN3 -> Filebackup1.DCN3
So I always have the original file.
get-childitem -path "C:\Users\Administrator\Desktop\Eurocard\SEB" -filter *.* -recurse | copy-item -destination "C:\Users\Administrator\Desktop\Eurocard\Backup"
(Get-ChildItem "C:\Users\Administrator\Desktop\Eurocard\SEB\*.*" -recurse).FullName |
Foreach-Object {
(Get-Content $_ -Raw).
Replace('*','Æ').
Replace('"','Æ').
Replace('#','Æ').
Replace('¤','Æ').
Replace('&','Æ').
Replace('(','Æ').
Replace(')','Æ').
Replace('=','Æ').
Replace('?','Æ').
Replace('´','Æ').
Replace('`','Æ').
Replace('|','Æ').
Replace('@','Æ').
Replace('£','Æ').
Replace('$','Æ').
Replace('{','Æ').
Replace('[','Æ').
Replace(']','Æ').
Replace('}','Æ').
Replace('^','Æ').
Replace('~','Æ').
Replace('¨','Æ').
Replace('*','Æ').
Replace('<','Æ').
Replace('>','Æ').
Replace('\','Æ').
Replace('_','Æ').
Replace(';','Æ').
Replace('.','Æ').
Replace('!','Æ')|
Set-Content $_
}
Is there anyone who can help with this ?
|
Powershell Find Replace & Backup
|
It looks like all you want is to take a backup of your database. The commands below should do the work for you.
On Linux
/usr/bin/mysqldump -u{USERNAME} -p -h{HOST_NAME} {DATABSE_NAME} > {FILE_NAME}.sql
On windows
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqldump -u{USERNAME} -p -h{HOST_NAME} {DATABSE_NAME} > {FILE_NAME}.sql
Try the code snippet below to fix your script:
$sql = "SHOW DATABASES";
$query = mysql_query($sql, $connect);
$num_rows = mysql_num_rows($query);
echo "Baze de date:" . $num_rows;
while ($row = mysql_fetch_assoc($query)) {
$sql2 = "SHOW TABLES FROM " . $row['Database'];
$query2 = mysql_query($sql2, $connect);
echo "<h3>" . mysql_num_rows($query2) . " Tabele in: " . $row['Database'] . "</h3>";
$sql3 = "CREATE DATABASE `".$row['Database']."_backup`";
echo '<br> $sql3 = > ' . $sql3 . "<br><hr />";
$query3=mysql_query($sql3,$connect);
while ($row2 = mysql_fetch_assoc($query2)) {
foreach ($row2 as $rand2) {
$sql4 = "CREATE TABLE `".$row['Database']."_backup`.`".$rand2."` SELECT * FROM `".$row['Database']."`.`$rand2`";
echo '<br> $sql4 = > ' . $sql4 . "<br><hr />";
$query4=mysql_query($sql4,$connect);
}
}
}
|
I am trying to create a backup script with PHP and MySQL, but I have some problems. This is what I have done so far:
$sql="SHOW DATABASES";
$query=mysql_query($sql,$connect);
$num_rows=mysql_num_rows($query);
echo "Baze de date:".$num_rows;
while ($row = mysql_fetch_assoc($query)) {
$sql2="SHOW TABLES FROM ".$row['Database'];
$query2=mysql_query($sql2, $connect);
echo "<h3>".mysql_num_rows($query2)." Tabele in: ".$row['Database']."</h3>";
while( $row2 = mysql_fetch_assoc($query2) ) {
foreach($row as $rand) {
$sql3="CREATE DATABASE `'$rand'_backup";
$query3=mysql_query($sql3,$connect);
foreach($row2 as $rand2) {
$sql4="CREATE TABLE `'$sql3'_backup`.`'$rand2'` SELECT * FROM `'$rand'`.`'$rand2'`";
$query4=mysql_query($sql4,$connect);
}
}
}
}
As you can see, first I list the existing databases and the tables of each database. Then I use two foreach loops, so it creates a backup database for each database listed, and likewise for the tables. But it is not working.
Any idea of what the problem could be?
EDIT: I know that the mysql extension is deprecated, but I have to use it.
|
How to create a backup for database
|
for /f "tokens=*" %%G in ('dir /b /a:d "%Source%"') do (
md %destination%\%%G
svnadmin hotcopy %Source%\%%G %Destination%\%%G
)
The problem with the old solution was that instead of getting the folder names in the directory, it got the full subdirectory paths. This is why the result was something like C:\where\the\backup\will\be\C:\where\the\folders\are.
That was the reason I got the error: The filename, directory name, or volume label syntax is incorrect. The script part above (which works fine) tokenizes the subdirectory path to get just the folder name and uses it.
|
My main goal is to back up SVN repositories in a REPOS folder. Since "svnadmin hotcopy" needs both source and target folders, I need to create new folders in a different directory, named folderName_backup, and then copy the repositories using "svnadmin hotcopy". By the way, this has to be done in a Windows batch file. My code for this portion is the following:
for /d %%X in (%source%\*) do (
md %destination%\%%X_backup
svnadmin hotcopy %%X %destination%\%%X_backup
)
After running this code I get the error: The filename, directory name, or volume label syntax is incorrect.
|
How can I create new empty folders in a directory using names of subfolders of another directory using batch script?
|
Online searches show this could be related to:
1. The 64-bit version of Access vs the 32-bit version
2. The version of Access you are running, if it is not patched
See this related question:
Similar Stack Overflow question
|
I have uploaded an MS Access database to a shared drive location in a Windows folder. For a couple of days the database works fine, and then it automatically starts creating backup copies of the database every time users try to use it. While the backup copies are created, the size of the parent database gets reduced from 10 MB to 150-200 KB.
When users try to open the database, they get the message: "Unrecognized database format '\\10.10.5.7\Database\DB-R.accdb'"
Any suggestions?
|
MS Access Database Backup
|
Generally the easiest way to keep files for just 7 days is to name them .mon, .tue, etc. Then you just overwrite the previous week's file every Monday, Tuesday, and so on.
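Applied to the script in the question, a hedged sketch is to switch the date suffix to a lowercase weekday name (date +%a is locale-dependent), so each run overwrites last week's file:
Mdate="$(date +%a | tr '[:upper:]' '[:lower:]')"   # mon, tue, wed, ...
mysqldump -uroot -pPassword asia stats | gzip > /home/backup/asia_$Mdate.gz
The same name is then reused in the mput, so the file on the FTP server is overwritten too and no remote cleanup is needed.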
|
I got a bash script from the internet, and it looks good.
It already backs up, uploads to FTP, and deletes local backups older than 7 days,
but it does not delete backups older than 7 days on the remote FTP server.
#!/bin/sh
Mdate="$(date +"%d-%m-%Y")"
mysqldump -uroot -pPassword asia stats | gzip > /home/backup/asia_$Mdate.gz
cd /home/backup/
ftpserver="ftp.drivehq.com"
ftpuser="username"
ftppass="password"
ftp -n -i $ftpserver <<EOF
user $ftpuser $ftppass
cd backupstats
mput asia_$Mdate.gz
quit
EOF
find /home/backup/asia_*.gz -maxdepth 1 -type f -mtime +7 -delete
The example backup name will be like "asia_17-08-2014.gz".
Thanks in advance for the help.
|
Bash script to delete files from remote FTP older than 7 days
|
You can use flashback technology to undo changes to a specific table back to a specific point in time.
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr003.htm
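A hedged sketch of what that could look like for this table (the credentials are placeholders, the one-hour window is just an example, undo retention must cover the window, and FLASHBACK TABLE requires row movement to be enabled):
sqlplus -s owner/password <<'SQL'
ALTER TABLE equipment_details ENABLE ROW MOVEMENT;
FLASHBACK TABLE equipment_details TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
SQL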
|
We have a table equipment_details which requires a refresh every month as part of a release. It is a reference table, maintaining the details of the equipment. There are other tables which maintain the order details, with foreign key references to this table.
The requirement is to load new data into equipment_details as part of a release and, if required, to be able to roll back to the previous state before the update. The rollback plan/scripts should remain consistent across releases.
The two approaches we are considering are:
Back up equipment_details as, say, equipment_details_2.0 - create a copy of the table along with the data - and then update equipment_details with the new equipment. If a rollback is required, rename the tables to go back to the previous state. I believe this would have challenges with maintaining the foreign key references.
Back up only the table data, the structure remaining the same, and then, if a rollback is required after updating, restore the table from the backed-up data.
Or is there any other way this can be dealt with? Any suggestions? Thanks.
|
Backing up and restoring table data in Oracle 10g
|
Apache Subversion is a version-control system, i.e. a time machine for your code, not a backup tool. While SVN can act as a backup system, it provides much more than just backups, and - as opposed to Git - Subversion is a real time machine where you can revert to a particular point in time and see your code as it was at, say, 2012/02/11 02:11 AM.
It's awkward not to use a version-control system in 2014. A VCS can be your best friend and will help you a lot once you learn how to use it. You may want to learn more about version control by reading the Bible of Subversion - SVNBook.
VisualSVN Server is installed on your machine locally, so if you lose your computer (i.e. hardware malfunction leading to complete data loss) then your versioned data is lost. To ensure your code is safe, you can configure VisualSVN Server to work with repositories which are hosted on a network share (e.g. NAS device) and consider making scheduled backups of your repos. Using a USB drive to host your repos is also a solution but for "one-man dev team" only.
You can install VisualSVN Server on a Windows Azure virtual machine.
You can try some hosted Subversion solution, however I'd avoid it. Recent attack on CodeSpaces makes me think that using shared hosting for source code is not the best choice for anyone.
|
I want to back up my code online, and I tried using Subversion with VisualSVN Server. It says (local) in VisualSVN. Does this mean that if my computer breaks I would lose everything? What should I do to back it up?
|
How to back up code online using SVN [closed]
|
I used Deja Dup, which is installed in Ubuntu by default.
It's actually a front-end for Duplicity, which you can use directly if you prefer a command-line approach.
Of course, there are several other solutions, so take a look at the list
here
|
Can anyone suggest a Linux backup utility that can take a copy of the current system state and configuration, so I can boot from it or install it on an external/new partition on a different machine?
|
Linux image backup utility
|
You can do it like this:
String path = "/yourFolder";
String[] chmod = { "su", "-c","chmod 777 "+path };
try {
Runtime.getRuntime().exec(chmod);
} catch (IOException e) {
e.printStackTrace();
}
|
I am making an application to back up & restore apps and their data with RootTools.
I back up the folder data/data/com.example.anotherapp to the SD card. Now I uninstall/reinstall the app and copy the folder com.example.anotherapp back to data/data/, but the app can't run.
How do I chmod the folder com.example.anotherapp?
public static boolean restoreDatabase(Context context, AppManagerItem item) {
String from = "/data/data/";
File fileOut = new File(Environment.getExternalStorageDirectory(),
context.getString(R.string.app_name));
String to = fileOut.getAbsolutePath() + File.separator
+ item.getPackageName();
to = normalizeParameter(to);
boolean isSuccess = false;
String comando = "cp -r " + to + " " + from;
Process suProcess;
try {
suProcess = Runtime.getRuntime().exec("su");
DataOutputStream os = new DataOutputStream(
suProcess.getOutputStream());
os.writeBytes(comando + "\n");
os.flush();
os.writeBytes("exit\n");
os.flush();
try {
int suProcessRetval = suProcess.waitFor();
if (255 != suProcessRetval) {
// Acceso Root concedido
isSuccess = true;
} else {
// Acceso Root denegado
isSuccess = false;
}
} catch (Exception ex) {
Log.w("Error ejecutando el comando Root", ex);
}
String chmod = "chmod -R 757 " + from + item.getPackageName();
Runtime.getRuntime().exec(chmod);
} catch (IOException e) {
e.printStackTrace();
}
return isSuccess;
}
Thank you.
|
How to chmod folder data/data/com.example.anotherapp
|
This is a function I use, to perform a database backup.
function _backup_db()
{
$this->load->dbutil();
$this->load->helper(array('file', 'download'));
$backup =& $this->dbutil->backup();
$filename = 'backup-' . time() . '.zip';
write_file('/backups/' . $filename, $backup);
force_download($filename, $backup);
}
The only thing I can notice that's different is the file name in the backup() function. According to the user guide, you only need to add the file name to the backup() array if it's a .zip file.
http://ellislab.com/codeigniter/user-guide/database/utilities.html#backup
|
This is my controller function, in which I use the database utility library to create a backup of the database. When I download a backup, the zip file cannot be opened with a zip extractor. What should I do?
function backup_database() {
$file_name = 'accounts';
$date = date('@Y.m.d-H.ia');
$name = $file_name . $date;
// Load the DB utility class
$this->load->dbutil();
// Backup entire database and assign it to a variable
$backup = & $this->dbutil->backup(array('filename' => "$name.sql"));
// Load the file helper and write the file to server
$this->load->helper('file');
write_file("$name.zip", $backup);
// Load the download helper and send the file to desktop
$this->load->helper('download');
force_download("$name.zip", $backup);
}
|
Codeigniter Database backup not returning readable file
|
A quick edit should fix it - you echo the timestamp but never actually set the timestamp variable that the mysqldump line uses:
@echo off
cls
echo Date format = %date%
echo dd = %date:~0,2%
echo mm = %date:~3,2%
echo yyyy = %date:~6,4%
echo.
echo Time format = %time%
echo hh = %time:~0,2%
echo mm = %time:~3,2%
echo ss = %time:~6,2%
echo.
set timestamp=%date:~6,4%-%date:~3,2%-%date:~0,2%-%time:~0,2%-%time:~3,2%-%time:~6,2%
pushd "C:\Program Files\MySQL\MySQL Server 5.5\bin"
mysqldump --user=root --password=***** leaverequest>"c:\backup\backup-%timestamp%.sql"
|
I'm trying to add a timestamp to a MySQL database dump file with a .bat file, but it's not going too well. The timestamp isn't added - I just get a backup-.sql file. Any tips?
My file:
@echo off
cls
echo Date format = %date%
echo dd = %date:~0,2%
echo mm = %date:~3,2%
echo yyyy = %date:~6,4%
echo.
echo Time format = %time%
echo hh = %time:~0,2%
echo mm = %time:~3,2%
echo ss = %time:~6,2%
echo.
echo Timestamp = %date:~6,4%-%date:~3,2%-%date:~0,2%-%time:~0,2%-%time:~3,2%-%time:~6,2%
pushd "C:\Program Files\MySQL\MySQL Server 5.5\bin"
mysqldump --user=root --password=***** leaverequest>c:\backup\backup-%timestamp%.sql
|
Add timestamp to filename with mysqldump from batch file
|
I don't think anything like was.drupal exists in the standard Drupal directory structure.
You can refer to:
Directory Structure
Drupal 7 Folder Structure & important files
Drupal: How to structure your modules directory
|
I'm working on a Drupal installation maintained until now by a person I cannot contact anymore. Inside my Drupal directory there's a "was.drupal" directory. Does someone know what it could be? I'm pretty sure it is generated by some kind of Drupal tool, but I don't know which.
|
What is this "was.drupal" directory?
|
I know this question is a bit old, and you may have already found a solution, but just in case other people stumble on this question:
Heroku really is storing the files where it says it is. What's happening when you run heroku run bash is Heroku is spinning up a one-off dyno to run the command. This means that you will not be given a command prompt in the dyno that is actually running your app. This is why you are not able to find the file you're looking for.
There are currently no official add-ons that support backing up physical files (only databases); however, you could write your own custom script to back up your data to wherever you choose (S3 or otherwise). To do so, you will likely need to use Heroku Scheduler to run your backup script in a cron-like way.
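A hypothetical sketch of such a scheduled task (the FTP host, credentials and the use of curl are assumptions, not part of the original answer); note that it only sees files present on the dyno it runs in, because of the ephemeral filesystem:
tar -czf /tmp/files-$(date +%F).tar.gz /app/files
curl -T /tmp/files-$(date +%F).tar.gz ftp://user:password@backup.example.com/redmine/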
|
I have a Redmine (www.redmine.org) installation pushed up onto Heroku (cedar stack). On my local instance of Redmine, the way file uploads work is that the database simply stores some data about the file including a name and the location of the file on disk, and the file itself is just stored on disk under [app-location]/files (Redmine is a ruby-on-rails application). When my Redmine project is pushed to Heroku, the files directory is nowhere to be found. From what I've read about Heroku's filesystem, this is no surprise. But what is surprising and confusing, is that file uploads still work and I didn't setup s3 which is the common recommendation for file uploads on Heroku. I checked the Heroku database to get the data about the file upload.
Here are the steps I took to locate the file.
heroku run rails c
and – to get the location of the most recent file – ran:
Attachment.last.diskfile
which returned:
=> "/app/files/2014/06/140610184025_Very-Basic-Globe-icon.png"
This path simply does not exist on the Heroku instance (using heroku run bash and listing directories or running a find). I also downloaded a dump of the Heroku database and imported it locally. The database data shows up on my local instance, but the file can't be found (no surprise).
So my questions are:
Where is the Heroku instance storing the files really?
Is there a way for me to back those files up locally without relying
on Amazon s3?
This app should remain fairly small, so I am not concerned about massive scalability, I just want to be able to get the file uploads if one day needed.
|
Heroku file backup without s3
|
I'm the author of hobocopy. It is written to expect source folder, destination folder, and a file selector. So you're not going to be able to use a full path. That said, you can use the flags that can be found by running "help for" at the command prompt to break apart the path you find into directory and file components. Something like %~nf in your case, I believe.
|
Morning All,
We're currently running some software for our users which is failing to copy their PSTs onto our servers; it's too intrusive, requires client-side software and configuration, and it's paid for!
I'm used to free/open source software and prefer command-line batch files, as I find them easier to automate and to add to scheduled tasks without worrying about user input.
I have found hobocopy, which works great! - but only if you list a source folder, a destination folder and then a file type... My script searches the C:\ drive, finds PST files and lists the full file paths. Hobocopy doesn't seem to handle this.
Below is my script:
@echo off
REM ### COPY HOBOCOPY TO WINDOWS DIR #####
if not exist C:\windows\Hobocopy.exe xcopy \\icao-supp-01\support\hobocopy\hobocopy.exe C:\windows
REM ### SCAN SYSTEM FOR LOCAL PST FILES ####
dir *.pst /s /b > C:\temp\pst.txt
REM ### RUN HOBOCOPY TO COPY PST FILES ####
For /f %f in (C:\temp\pst.txt) do hobocopy /y %f P:\
Here is the output of C:\temp\pst.txt:
C:\Jdeane.pst
C:\Games\IGNORE1.pst
C:\Windows\ModemLogs\fake2.pst
It won't copy the file paths, e.g.:
hobocopy /y C:\Jdeane.pst P: won't work. However, hobocopy /y C:\ P:\ *.pst would work.
My goal:
Search the C:\ drive for PST files and then have them backed up on a schedule to the servers.
Thanks in advance!
(PS: We're running Windows 7 x64 and Outlook 2010 if it makes a difference, and the users will NOT save their PSTs to our servers.)
|
Use Hobocopy to backup .PST's on system (Using exact path)
|
No, not using FTP.
You could back up to a hard-mounted NFS volume.
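A rough sketch of that approach (the host, export and mount point are placeholders, and the size is just an example):
mount -t nfs -o hard,rw backuphost:/exports/orabackup /mnt/orabackup
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/mnt/orabackup';
SQL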
|
I'm using Oracle 11g XE and I want to put the backup files in another space that I can connect to only via FTP.
Is it possible to run a statement like:
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = 'ftp://user:password@ftpserver_url';
or is there any other way to do it?
If not, how can I transfer the backup files to the other space automatically via FTP?
|
Make the oracle backup files in other space via ftp
|
I asked the same thing on the MSDN forums. Apparently, as of this date, there is no API that can be used to pull in that information.
Source: reply from MSDN forums
|
I'd like to consume information about the Recovery Services that I have setup in Azure. If I go to that section inside the Microsoft Azure Management Portal, I'll see a list of the Backup Vaults that I have created. I'm looking for an API that will let me pull up data about it, such as the one that is presented on the Dashboard:
- Name
- Status
- Location
- Storage used/left
- etc.
So far, I've only been able to find their Storage Services REST API.
Thank you
|
Does Microsoft Azure have a REST API to view information about Backup Vaults?
|
This is a way of getting a DateTime string independent of localization:
for /f "tokens=1 delims=." %%i in ('wmic os get localdatetime^|find "."') do set dt=%%i
This is format YYYYMMDDHHMMSScc
You can shorten this string to your needs. for example:
set dt=%dt:~0,8%
will set it to YYYYMMDD only.
set dt=%dt:~0,12%
will set it to YYYYMMDDHHMM
Then you can copy like this:
xcopy "\MY_SERVER_IP\SharedDRIVE\TEST*" "C:\Test_Folder_%dt%" /D /E /C /I /H /Y
|
I am trying to back up files from a network drive to my C: drive. For the copy I am using:
xcopy "\\MY_SERVER_IP\SharedDRIVE\TEST*" "C:\Test_Folder" /D /E /C /I /H /Y
Is there any way I can have the date added to this so the information does not get replaced but instead accumulates, giving a daily record of changes made to the files without losing older copies? This works for copying the most current information, but I need many months of records, not just the most current copy.
Any suggestions?
Thanks in advance
|
Copying backup files from a network drive daily, need different dated copies for the end of each day
|
imapsync perhaps, although that copies one user's mail, not the entire server.
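A minimal imapsync sketch for one mailbox (hosts, users and passwords are placeholders); you would repeat it per user, first from server 1 to server 2 and later from server 2 to server 3:
imapsync --host1 mail.oldserver.example --user1 alice --password1 secret1 \
         --host2 mail.myserver.example  --user2 alice --password2 secret2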
|
I need to temporarily archive email from server 1 to server 2, and then upload it to server 3. Why? My client is changing servers: he is giving up server 1 and wants to buy server 3. Server 2 is mine. How can I make remote copies of the IMAP mailboxes?
old server -> my server -> new server...
|
Remote backup imap
|
This should work in Vista and higher - it uses Robocopy (don't call it robocopy.bat).
Be very careful when specifying the target directory because the /mir option will create a mirror copy and delete files that don't exist in the source tree.
The /mir switch will wipe a drive if the root directory is specified as the target.
The first four lines of this code will give you reliable YY DD MM YYYY HH Min Sec variables in XP Pro and higher.
@echo off
for /f "tokens=2 delims==" %%a in ('wmic OS Get localdatetime /value') do set "dt=%%a"
set "YY=%dt:~2,2%" & set "YYYY=%dt:~0,4%" & set "MM=%dt:~4,2%" & set "DD=%dt:~6,2%"
set "HH=%dt:~8,2%" & set "Min=%dt:~10,2%" & set "Sec=%dt:~12,2%"
set "datestamp=%YYYY%%MM%%DD%" & set "timestamp=%HH%%Min%%Sec%"
set "fullstamp=%YYYY%-%MM%-%DD%_%HH%-%Min%-%Sec%"
robocopy "d:\Assignment" "d:\backup\Assignment %fullstamp%" /mir
It just remains for you to test this and set it up in Task Scheduler.
Share
Improve this answer
Follow
answered Apr 13, 2014 at 17:30
foxidrivefoxidrive
40.7k1010 gold badges5656 silver badges6969 bronze badges
2
thanks but didn't work for me but found another solution. Thanks
– Faisal Naseer
Apr 13, 2014 at 17:44
If you open the cmd window and type the batch file name then you will see any error messages on the console. I notice now that you are creating backups on the same drive, but if the drive fails then ALL your backups will disappear too. Back them up to different media - and have two separate copies of those backups on different media. DVD/CD/USB stick/External drive etc.
– foxidrive
Apr 13, 2014 at 22:59
|
|
How can I make a batch file, run by Windows Task Scheduler at a specified time every day (e.g. 6 pm), that backs up a directory and saves the copy under the directory name along with the date and time the backup was made?
I found a particular backup command for cmd:
backup d:\Assignment\*.* d:\backup /s
but it is not recognized as an internal or external command.
the directory structure is.
source: d:\Assignment
Destination: d:\backup\
|
Backup directory using cmd with windows scheduler
|
After changing pg_hba.conf, you should reload the configuration (or send a SIGHUP signal to the postmaster process) so that the change takes effect.
Why not use psql -f to execute the backup SQL file?
Alternatively, you can back up with pg_dump and restore with pg_restore, or back up and restore with the COPY command.
LIKE :
digoal=# copy tbl_join_1 to '/home/pg93/tbl_join_1.dmp';
COPY 10
digoal=# delete from tbl_join_1;
DELETE 10
digoal=# copy tbl_join_1 from '/home/pg93/tbl_join_1.dmp';
COPY 10
OR
pg93@db-172-16-3-150-> pg_dump -f ./tbl_join_1.dmp -t tbl_join_1
pg93@db-172-16-3-150-> psql
psql (9.3.3)
Type "help" for help.
digoal=# drop table tbl_join_1;
DROP TABLE
digoal=# \q
pg93@db-172-16-3-150-> psql -f ./tbl_join_1.dmp
SET
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
ALTER TABLE
|
I'm trying to restore a database from backup but I can't connect to postgresql.
namespace :db do
task import: :environment do
import_path = "~/backups"
sql_file = "PostgreSQL.sql"
database_config = Rails.configuration.database_configuration[Rails.env]
system "psql --username=#{database_config['username']} -no-password # {database_config['database']} < #{import_path}/#{sql_file}"
end
end
I tried changing the pg_hba.conf file (peer to md5).
In the console I tried the same thing with the super user postgres, but it still fails.
BTW, does anyone know a better way to restore a database? I used the backup gem.
EDIT:
I restarted the postgresql server and then authentication passed. But it didn't restore the db. I reverted the changes in the file and just added -h localhost to the psql command. The database restores now. The only errors I get now are:
must be owner of extension plpgsql //and
no privileges could be revoked for "public"
|
psql: FATAL: Peer authentication failed for user "expman"
|
1
I would start with this:
@ECHO off
SET "7ZIP=c:\Program Files\7-Zip\7za.exe"
SET "FROM=C:\A\move\Logs"
SET "TO=C:\A\move\moved"
SET OUTPUT=output.log
SET DD=%DATE:~7,2%.%DATE:~4,2%.%DATE:~-4%
if not exist %7ZIP% ECHO No 7z && GOTO :END
ROBOCOPY %FROM% %TO% /MOVE /S /MINAGE:5 /log+:%OUTPUT%
for /d %%X in (*) do (
"%7ZIP%" a -tzip "LOG_%DD%_%%X_Backup.zip" %%X
)
:END
pause
answered Feb 14, 2014 at 22:26 by djangofan (edited Feb 19, 2014 at 16:00)
I'm still not able to get it to call 7zip to do its thing. If I take out the 7zip functionality. I still am able to get it to do about half of what I need. I also changed the Minage to 14 as the files I'm working with are getting older... :) @ECHO off SET "FROM=C:\A\move\Logs" SET "TO=C:\A\move\moved" SET OUTPUT=output.log SET DD=%DATE:~7,2%.%DATE:~4,2%.%DATE:~-4% ROBOCOPY %FROM% %TO% /MOVE /S /MINAGE:14 /log+:%OUTPUT%
– holemt
Feb 19, 2014 at 15:50
You can get it working. It is just a simple matter of getting the command line options to the 7za command correct. So, just fix those things and try again. I made edits to the above script; please take note.
– djangofan
Feb 19, 2014 at 16:01
|
|
This is what I currently have:
ROBOCOPY C:\A\move\Logs C:\A\move\moved /MOVE /S /MINAGE:5 /log+:output.log
for /d %%X in (*) do (
"c:\Program Files\7-Zip\7z.exe" a "LOG"%DATE:~7,2%.%DATE:~4,2%.%DATE:~-4%Backup.zip" "%%X\"
pause
I am having some trouble trying to set up the batch file to zip the destination folder into a zip file using 7zip. Any suggestions or help?
|
Robocopy batch file to move and zip folder and output log
|
1
The ALM Rangers publish a TFS Planning Guide which has a section on how to approach DR with TFS: http://vsarplanningguide.codeplex.com/
For DR it expects you to restore to a machine with the same name. If you want to move TFS to a different machine, the recommended approach is to detach the Team Project Collections from within TFS Admin Console, then re-attach on a different TFS Instance on a different machine.
answered Jan 25, 2014 at 14:46 by Dylan Smith
What about other settings in tfs_configuration ? Does it work if i restore tfs_configuration of one pc to another?
– IT researcher
Jan 27, 2014 at 5:08
If you want to completely recreate the TFS environment on another PC, you can restore the TFS Backups then make sure to run the tfsconfig changeserverid command. See this blog post: blogs.msdn.com/b/buckh/archive/2006/10/17/…
– Dylan Smith
Jan 27, 2014 at 16:07
After installing TFS power tool from visualstudiogallery.msdn.microsoft.com/… .I didn't get database backup tool option added to TFS server administrative console. My TFS product version is 11.0.50727.1 . What is the reason for database backup tool option not being added?
– IT researcher
Jan 28, 2014 at 6:20
|
|
I am using Team Foundation Server 2012. TFS is installed on a Windows 8 PC. For the TFS database I am using SQL Server 2008 R2. As a disaster recovery plan I take a full backup of tfs_configuration (I think this is the only database used by TFS other than one database per collection) and backups of all the collection databases (one database for each collection). I have taken the backups using the SQL BACKUP command, but I don't know whether the backups alone will be helpful in case of DR.
Now in the case of disaster recovery (hardware and OS crash) I have to shift TFS to another, new PC. I think simply installing all the software (like TFS, Visual Studio, SQL Server) and restoring the databases will not work, as there will be some changes such as the computer name etc.
So how can I recover from a disaster quickly? What should the plan be (including how and what data to back up) and how can it be done?
|
Disaster recovery for TFS 2012
|
Read the file line by line and then just copy the file to your 'backup' folder:
$dir = __DIR__;              // folder that holds files.txt and the backup/ subfolder (assumed)
$files = "files.txt";
$lines = file($files);
foreach($lines as $file)
{
$file = trim($file);
copy($dir . '/' . $file, $dir . '/backup/' . $file);
}
But that has nothing to do with FTP, right?
Also, if your files always have .bk- in their name, this may be easier:
foreach (glob("*.bk-*") as $filename) {
copy($filename, 'backup/' . $filename);
}
|
I have .txt file named files.txt which contains a list of file names.
files.txt
index.php.bk-2013-12-02
index.php.bk-2013-12-07
index.php.bk-2013-12-10
index.php.bk-2013-12-20
index.php.bk-2013-12-26
function.php.bk-2013-12-20
function.php.bk-2013-12-23
contact.php.bk-2013-12-23
contact.php.bk-2013-12-30
I want to copy these files to the directory backup.
No need for recursion, I just want to copy them as they are.
My httpdocs folder looks like this
files.txt
index.php.bk-2013-12-02
index.php.bk-2013-12-07
index.php.bk-2013-12-10
index.php.bk-2013-12-20
index.php.bk-2013-12-26
function.php.bk-2013-12-20
function.php.bk-2013-12-23
contact.php.bk-2013-12-23
contact.php.bk-2013-12-30
backup
After I execute the script file, the above-mentioned .bk files must be copied into the folder backup.
How can I do that?
Any help will be very much appreciated.
Thanks.
|
How to copy FTP files to a specific folder?
|
1
After midnight, run the following:
old=$(date +"%Y%m%d" -d yesterday)
mv "db_${old}2355.tar.gz" different/directory && rm "db_${old}*.tar.gz"
I connected the move and delete commands with && as a safety precaution. This way yesterday's backups are deleted only if the move of the 2355 backup is successful. If you are short on disk space and less concerned about backup integrity, replace the && with a ; (or a newline).
Separately, if, as per the script in the question, only one file is going into the tar file, then tar is superfluous. You could instead replace those two lines with:
mysqldump -ubackup_user db_to_backup | gzip >~/backup/db_$DATE.gz
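If you want the cleanup to happen automatically, a crontab entry a few minutes after midnight can drive the move-and-delete step above (the script path below is just a placeholder for wherever you save those commands):
5 0 * * * /home/user/bin/rotate_db_backups.sh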
answered Dec 28, 2013 at 7:28 by John1024 (edited Dec 28, 2013 at 7:41)
|
|
I have this bash script that creates a backup of a database every 5 minutes that i run with crontab. At the end of the day, i want to delete the ones created and leave the last one created on that day.
Here's the contents of the script:
#! /bin/bash
DATE=$(date +"%Y%m%d%H%M")
mysqldump -ubackup_user db_to_backup > ~/backup/db_$DATE.sql
tar -zcvf ~/backup/db_$DATE.tar.gz ~/backup/db_$DATE.sql
rm ~/backup/db_$DATE.sql
Sample files created:
db_201312272300.tar.gz
db_201312272305.tar.gz
db_201312272310.tar.gz
db_201312272315.tar.gz
db_201312272320.tar.gz
db_201312272325.tar.gz
db_201312272330.tar.gz
db_201312272335.tar.gz
db_201312272340.tar.gz
db_201312272345.tar.gz
db_201312272350.tar.gz
db_201312272355.tar.gz
db_201312280000.tar.gz
db_201312280005.tar.gz
db_201312280010.tar.gz
db_201312280015.tar.gz
it should leave the following at the end of the day:
db_201312280000.tar.gz
db_201312280005.tar.gz
db_201312280010.tar.gz
db_201312280015.tar.gz
And have the following file copied/moved to a different directory:
db_201312272355.tar.gz
|
Delete files with filename pattern using bash script
|
1
Unfortunately you can't use snapshot to bring a log-shipping backup instance online. You might be able to do it if the data resides on a san where you can force a fast lun copy and then mount a second copy of it real quick. Even without a SAN you can basically, between log loads or while you let them stack up for a bit, offline the DB, copy the files, and then bring up the copied version. Ugly but it gets the job done.
If you can get both DBs involved up to 2012 then I'd recommend you read up on AlwaysOn Availability Groups. http://technet.microsoft.com/en-us/library/hh510230.aspx They are cool because you can leave the second copy online in read-only mode while it is mirroring, all the time. Thus the stupid, almost repetitive, name for what should have been called something simple like "Live Mirroring".
Also, questions like this might better be asked on one of the sister sites like http://ServerFault.com or https://dba.stackexchange.com/
answered Dec 27, 2013 at 15:12 by Mark (edited Apr 13, 2017 at 12:42 by CommunityBot)
|
|
I've got two SQL Servers, one of these servers (Server A) is backing up transaction logs on some database and uploading them to the other (Server B). Unfortunately I have no access to Server A, I simply have to trust that it is doing its job of periodically uploading its transaction logs to Server B.
Now, suppose Server B needs to recover the database for whatever reason. Doing this will break its ability to receive further transaction log backups.
Is there any way to copy/branch/backup the restoring database, so I can have one version of it that will continue to apply the transaction logs, and one version that will be recovered for reading/writing?
|
Copying restoring databases in SQL Server 2008/2012
|
With proposed changes from my comments incorporated, I suggest this code:
#!/bin/bash
src="/mnt/$sourceboxname/$drive"
dst="/backup/$sourceboxname/$drive"
timestamp="$src/timestamp"
errors=$({ cd "$src" && find -newer "$timestamp" | while read objresults;
do
mkdir -p "$(dirname "$dst/$objresults")"   # create the destination's parent directory
[[ -d "$objresults" ]] || gzip -fc < "$objresults" > "$dst/$objresults.gz"
done; } 2>&1)
if [[ -z "$errors" ]]
then
touch "$timestamp"
else
echo "$errors" >&2
exit 1
fi
|
I'm working on improving our bash backup script, and would like to move away from rsync and towards using gzip and a "find since last run timestamp" system. I would like to have a mirror of the original tree, except have each destination file gzipped. However, if I pass a destination path to gzip that does not exist, it complains. I created the test below, but I can't believe that this is the most efficient solution. Am I going about this wrong?
Also, I'm not crazy about using while read either, but I can't get the right variable expansion with the alternatives I've tried, such as a for file in 'find' do.
Centos 6.x. Relevant snip below, simplified for focus:
cd /mnt/${sourceboxname}/${drive}/ && eval find . -newer timestamp | while read objresults;
do
if [[ -d "${objresults}" ]]
then
mkdir -p /backup/${sourceboxname}/${drive}${objresults}
else
cat /mnt/${sourceboxname}/${drive}/"${objresults}" | gzip -fc > /backup/${sourceboxname}/${drive}"${objresults}".gz
fi
done
touch timestamp #if no stderr
|
find and gzip a directory recursively without a directory/file test
|
You have to initiate the copy-AMI operation from the destination region. In your copy command/API call, you have to specify the source region where your current AMI exists.
Read this carefully. I am quoting the relevant info from the link I referred to:
This command is submitted to and initiated from the destination region endpoint.
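As a sketch, with the AWS CLI the call looks like this (the region names and AMI id below are placeholders); if I remember correctly, recent boto versions expose the same operation as copy_image() on a connection opened against the destination region:
aws ec2 copy-image --region eu-west-1 --source-region us-east-1 --source-image-id ami-12345678 --name "nightly-ami-backup"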
|
I'm trying to create an automatic backup of my Amazon AMIs to another region (automatically, of course).
This is possible with the web GUI which Amazon provides, but I couldn't find an API which allows you to do that (I searched both the boto and the REST APIs).
The only thing that was close or related to this was the boto.*.Image.CopyImage function, but there is no way to give it a destination in another region (nor is there a PasteImage function), so I don't see how it helps me.
Does anyone know how to do this?
|
Backing up my Amazon AMI to an external image
|
It could be that you simply have not waited long enough. From the docs in reference to the call to dataChanged:
This call notifies the backup manager that there is data ready to be
backed up to the cloud. At some point in the future, the backup
manager then calls your backup agent's onBackup() method.
The backup is not initiated right away. It is scheduled for a future time. You can initiate a backup right away by using adb on the command line.
See http://developer.android.com/guide/topics/data/backup.html (Testing your backup agent)
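For reference, the adb sequence for forcing a backup of your app during testing looks like this (the package name is a placeholder):
adb shell bmgr enable true
adb shell bmgr backup com.example.myapp
adb shell bmgr run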
|
I have done the following to get SQLite database Backup/Restore working.
(1)get registration from Google.
[http://developer.android.com/google/backup/signup.html][1]
(2)Add the key to Manifest xml file
<application>
<meta-data android:name="com.google.android.backup.api_key"
android:value="backup_service_key_from_google" />
...
</application>
(3)Add my backagent agent in the XML file
<application
android:allowBackup="true"
android:label="@string/app_name"
android:theme="@style/AppTheme"
...
android:backupAgent="MyBackupAgent" android:restoreAnyVersion="true"
>
(4)Create a class called MyBackupAgent
class MyBackupAgent extends BackupAgentHelper{
@Override
public void onCreate(){
FileBackupHelper dbs = new FileBackupHelper(this, "../databases/"+SQLHelp.dbName);
addHelper(SQLHelp.dbName, dbs);
}
}
(5)Call the backup Manager for backup
BackupManager mBackupManager = new BackupManager(this);
mBackupManager.dataChanged();
After step 5, I do not see anything happening. I put a break point in MyBackupAgent. Nothing stops in the onCreate() function.
Any suggestions?
|
SQLite database backup android
|
1
I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to the log file, as the Perl script plays with the tty. Here is what worked for me.
If you need execute innobackupex from the command line, you can do:
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz 2>/path/mybkp.log
if you need to script it and get an OK message you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>
answered Sep 1, 2015 at 13:50 by Dmitriy Royzenberg (edited Sep 2, 2015 at 4:06)
|
|
How is it possible to run this and output the innobackupex output to a file (but still send output to the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to output the innobackupex log, with "... completed OK!" as the last line, to a file. How can I do that?
|
run innobackupex with gzip and pipe display output to file
|
1
ROBOCOPY f:\myusb c:\myfolder /mir
It will copy from source (the usb) to target (the hd) from/to indicated folders all the new and updated files, ignore and leave all the non changed files and remove from target all the files not present in source.
answered Nov 7, 2013 at 17:54 by MC ND
+1 Robocopy with /mir is very powerful, but beware don't get the target folder wrong. It can wipe enormous quantities of data if you pick the root directory, or the wrong folder/drive.
– foxidrive
Nov 7, 2013 at 19:22
|
|
How would I go about making a batch file that empties a backup folder on my PC and then copies the data on my USB drive over to that folder? Then each evening my files would be backed up by simply clicking on the file. I don't have a very big grasp on how batch files work. Could somebody point me in the right direction as to what this would look like?
|
Batch File that will Backup My Files?
|
1
This question isn't really a good fit for SO (which is for development and programming questions); however, Apple has a good video on the basics of Time Machine. It really is as simple as you've described it: Time Machine will take a backup each hour and keep that backup for a day. Then it'll keep one backup from that day for a month, and one backup from each week for as long as possible.
The backups are "deltas", meaning only changed data is backed up, which helps keep backup sizes small and therefore maximise the amount of backups you can keep.
This site also has a wealth of hints and tips about Time Machine.
answered Nov 6, 2013 at 17:08 by KenD
|
|
I am really sorry for this type of question, but can you please explain to me in a better way the statement below so I can understand better the type of back up that TIME MACHINE does. Thank you
"Time Machine keeps hourly backups for the past 24 hours, daily backups for the past month, and weekly backups until your backup drive is full."
|
Time Machine Back Up
|
You don't allocate space for the file names; you should. You're writing over indeterminate memory. This would probably work better:
void backupf(char *namelist, char *dirname)
{
char in_filename[MAXPATHLEN];
char out_filename[MAXPATHLEN];
char line[MAXPATHLEN];
FILE *filenames = fopen(namelist, "r");
if (filenames == NULL)
{
fprintf(stderr, "Cannot Open File\n");
exit(EXIT_FAILURE);
}
while (fgets(line, sizeof(line), filenames) != NULL)
{
line[strcspn(line, "\n")] = '\0'; /* strip the newline kept by fgets (strcspn is from <string.h>) */
snprintf(in_filename, sizeof(in_filename), "./%s", line);
snprintf(out_filename, sizeof(out_filename), "%s/%s", dirname, line);
backup(dirname, in_filename, out_filename);
}
fclose(filenames);
}
|
My code compiles just fine but when I run it I get bus error: 10
void backupf(char *namelist, char *dirname)
{
char *in_filename;
char *out_filename;
char line[MAXPATHLEN];
FILE *filenames = fopen(namelist, "r");
if(filenames == NULL)
{
fprintf(stderr, "Cannot Open File\n");
exit(EXIT_FAILURE);
}
while( fgets(line, sizeof line, filenames) != NULL )
{
sprintf(in_filename, "./%s\n", line);
sprintf(out_filename, "%s/%s\n", dirname, line);
}
backup(dirname, in_filename, out_filename);
fclose(filenames);
}
It's supposed to take a text file argument with a list of file names and then use that information to back it up to a backup directory using a backup function I've written.
|
Bus Error: 10 in C
|
Most likely because dump can't be found in the environment in which cron is running.
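The usual fix is to call the command by its full path (or set PATH at the top of the crontab). Assuming dump lives in /sbin, which you can verify with "which dump", the entry would become:
0 20 * * 5 /sbin/dump -0f /Mt/home.bck /home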
|
I am using CentOS and trying to perform a backup of my /home directory using crontab.
When I run my dump command in the terminal it works fine but when I try to run it using crontab it does not run.
This is my command: (runs once a week, Friday at 8pm)
0 20 * * 5 dump -0f /Mt/home.bck /home
Why doesn't it run?
When I look at the log file of cron it says:
(root) CMD (dump -0f /Mt/home.bck /home)
This message is printed every time the crontab entry is supposed to run.
|
command dump is not found in crontab
|
Both answers are incorrect: yours and theirs.
You are right about one thing -- SQL Server won't let you even CREATE log backups, on a database set to "simple" recovery model.
So their answer is incorrect, because it says "restore each log backup", when log backups cannot exist.
However, your answer is incorrect, also, because there was ONLY ONE DIFFERENTIAL BACKUP since the full backup, and THAT DIFFERENTIAL BACKUP FAILED.
So... the real answer is:
(1) Attempt to make a backup of the failed database.
This cannot make things any worse, and if it succeeds, might be very useful later. (If it has very important info, you can try restoring it to an alternate environment later, and see if any of that info can be recovered.)
(2) Restore from the latest full backup.
Questions?
|
I am working on below exercise. I would have thought the answer is "Restore the latest full backup. Then, restore the latest differential backup".
However, the answer given is "Restore the latest full backup, and restore the latest differential backup. Then, restore each log backup taken before the time of failure from the most recent differential backup".
I didn't think this was correct, as transaction log backups are not taken in the Simple recovery model?
Thanks!
Scenario:
The database uses Simple Recovery model.
Full database backup 01:00 daily.
Differential backup 13:00 daily.
Issue: The differential backup fails. Then database fails at 14:00. How to restore database and ensure minimal data loss?
|
Restore Database - Simple Recovery Model
|
1
I think it is because impersonationLevel has not been set. Try this:
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strSysName & "\root\cimv2")
Set colProcessList = objWMIService.ExecQuery _
("Select * from Win32_Process Where Name = 'Outlook.exe'")
For Each objProcess in colProcessList
objProcess.Terminate()
Next
answered Nov 5, 2013 at 11:55 by Raghu Nandan
|
|
I have searched the Stack Overflow site for questions related to closing
Outlook. There were a number of hits but none seem to describe what I'm
trying to do.
The problem I'm trying to solve is how to backup the Outlook data base
automatically and unattended. Outlook needs to be closed (if it is
running) before copying the .pst files.
I found a VBScript at (www.howto-outlook.com/howto/closeoutlookscript.htm)
that seems like what I need. But I can't get it to run when initiated from
the Windows Task Scheduler.
I am running on a Windows 8 Sony laptop.
My VBScript should close Outlook prior to doing a backup of the .pst files.
The code is stored in CloseOutlookVerify.vbs.
Below is the offending code from CloseOutlookVerify.vbs:
Set colProcessList = objWMIService.ExecQuery _
("Select * from Win32_Process Where Name = 'Outlook.exe'")
For Each objProcess in colProcessList
Set objOutlook = CreateObject("Outlook.Application")
' The above line fails with ERR = 70 - Permission denied
objOutlook.Quit
Closed = 1
Next
This script works correctly if I double-click on the .vbs file
from Windows Explorer.
It works correctly if I run it from a DOS Command Prompt window.
It fails with err = 70 when run via the Windows Task Scheduler.
So, what is different about running this script from a command prompt
vs. by the task scheduler? And how can I make it work correctly when run
by the task scheduler?
FYI - I made my living programming in C and Unix shell languages, but this
is my first exposure to VBS in the Windows environment.
Many thanks for any expertise you can provide.
|
How to close Outlook with VBScript when run from Task Scheduler
|
A SQL Server database backup contains the structure and the data, so if you have run a full backup on one SQL Server 2012 server, you can restore it onto another SQL Server 2012 (or later) instance without having to create an empty database first.
|
I remember restoring a db on the Linux side. I used mysqldump, and before I could restore the backup on the other server I had to create a DB with the same name.
Now I am going to switch servers on the Windows side using SQL Server 2012. I am backing up many SQL DBs and calling them, for now, db1.bak, db2.bak...
When I want to restore them on the new server, do I need to create a "structure" first with the same DB names, or can I simply restore my DBs with the restore command one by one?
Is there anything else I should prepare? Thanks
|
How to restore a SQL Backup .bak file on a different (new) server? SQL 2012
|
Depends on what you consider "simple". Since it's only a small number of tables, the way I'd do it is like this:
dump individual tables with pg_dump -t table_name --column-inserts
edit the individual files, change the schema definitions to be compatible with mysql (e.g. using auto_increment instead of serial, etc. : like this: http://www.xach.com/aolserver/mysql-to-postgresql.html only in reverse)
load the files into the mysql utility like you would any other mysql script.
If the files are too large for step #2, use the -s and -a arguments to pg_dump to dump the data and the schema separately, then edit only the schema file and load both files in mysql.
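A rough sketch of steps 1 and 3 (the database, table and file names below are placeholders):
# on the PostgreSQL machine: schema, then data as plain INSERT statements
pg_dump -t big_table -s mydb > big_table_schema.sql
pg_dump -t big_table -a --column-inserts mydb > big_table_data.sql
# after hand-editing the schema file into MySQL syntax, on the MySQL machine:
mysql -u root -p mydb < big_table_schema.sql
mysql -u root -p mydb < big_table_data.sql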
|
I have a PostgreSQL database with 4-5 tables (some of those have more than 20 million rows). I have to replicate this entire database onto another machine. However, that machine has MySQL, and for some reason I cannot install PostgreSQL on it.
The database is static and is not updated or refreshed. No need to sync between the databases once replication is done. So basically, I am trying to backup the data.
There is a utility called pg_dump which will dump the contents onto a file. I can zip and ftp this onto the other server. However, I do not have psql on the other machine to reload this into a database. Is there a possibility that mysql might parse and decode this file into a consistent database?
Postgres is version 9.1.9 and mysql is version 5.5.32-0ubuntu0.12.04.1.
Is there any other simple way to do this without installing any services?
|
Duplicating PostgreSQL database on one server to MySQL database on another server
|
1
The easiest db migration is detach / move the files / attach, but you must have copied the logins first. You want the logins to keep the same IDs so that the database users are re-bound automatically. For that, you can use sp_help_revlogin (http://support.microsoft.com/kb/918992). This script will generate a login creation script you must run on the destination server. Then you can use the Copy Database Wizard, or manually detach the databases, copy the files, and attach them at the destination.
answered Oct 22, 2013 at 3:01 by PollusB
|
|
I'm moving from 1 Server 2012 box to another Server 2012 box. I'm trying to move my SQL Server 2012 Express instance exactly (databases, logins, ect...) as it is, from 1 server to the other.
What is the easiest way to do this? I have just now realized my .bak backups do not restore the way I thought.
Here is my code to backup the databases.
REM @ECHO OFF
SETLOCAL
REM Get date in format YYYY-MM-DD (assumes the locale is the United States)
FOR /F "tokens=1,2,3,4 delims=/ " %%A IN ('Date /T') DO SET NowDate=%%D-%%B-%%C
REM Build a list of databases to backup
SET DBList=%SystemDrive%SQLDBList.txt
SqlCmd -E -S LOCALHOST\SQLEXPRESS -h-1 -W -Q "SET NoCount ON; SELECT Name FROM master.dbo.sysDatabases WHERE [Name] NOT IN ('tempdb')" > "%DBList%"
REM Backup each database, prepending the date to the filename
FOR /F "tokens=*" %%I IN (%DBList%) DO (
ECHO Backing up database: %%I
SqlCmd -E -S LOCALHOST\SQLEXPRESS -Q "BACKUP DATABASE [%%I] TO Disk='C:\SQLBackup\Database\MSSQL\%NowDate%_%%I.bak'"
ECHO.
)
REM Clean up the temp file
IF EXIST "%DBList%" DEL /F /Q "%DBList%"
ENDLOCAL
How do I restore the data back with this?
|
What is the easiest way to move all data to a new MS SQL Server?
|
Be careful with ctime.
ctime is related to changes made to inodes (changing permissions, owner, etc)
atime when a file was last accessed (check if your file system is using noatime or relatime options, in that case the atime option may not work in the expected way)
mtime when data in a file was last modified.
Depending on what you are trying to do, the mtime option could be your best choice.
Besides, you should check the print0 option. From man find:
-print0
True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or
other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.
I do not know exactly what you are trying to do, but this command could be useful for you:
find /var/www -mtime +180 -print0 | xargs -0 tar -czf example.tar.gz
|
Alright, so I have a web server running CentOS at work that is hosting a few websites internally only. It's our development server and thus has lots [read: tons] of old junk websites and whatnot.
I was trying to put together a command that would find files that haven't been modified for over 6 months, group them all in a tarball and then delete them. So far I've tried many different kinds of find commands with arguments and whatnot. Our structure looks like this:
/var/www/joomla/username/fileshere/temp
/var/www/username/fileshere
So I tried something along the lines of:
find /var/www -mtime -900 ! -mtime -180 | xargs tar -cf test4.tar
Only to get a 10 MB resulting tar, when the expected result would be over 50 GB.
I tried using gzip instead, but I ended up zipping MY WHOLE SERVER, thus making it unusable; I had to transfer the whole filesystem, reinstall a complete new server, and go through lots of trouble... you get the idea. So I want to find the right command that won't blow up our server but will find all FILES and DIRECTORIES that haven't been modified for over 6 months.
|
Script to zip complete file structure depending on file age
|
Do not reinvent the wheel. Take a look at rsnapshot. Unless you want to use this as a learning exercise, I see no reason why you would want to spend the time that has already been spent to solve this problem.
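For the retention pattern described in the question, the relevant part of rsnapshot.conf would look something like the following. This is only a sketch: the interval names and counts are illustrative, fields must be tab-separated, and older rsnapshot versions call the keyword "interval" rather than "retain".
retain	twohourly	12
retain	daily	5
retain	weekly	2
cron then runs "rsnapshot twohourly", "rsnapshot daily" and "rsnapshot weekly" at the appropriate times, and rsnapshot rotates the old snapshots away for you.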
|
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 10 years ago.
Improve this question
I have a small script which is creating a backup every 2 hours. Now I would like to delete the old ones. I know "find" can do this, but I want it more advanced.
I want to keep
all backups form the last 24 hours
4 backups from the last 5 days
1 backup from the last 14 days
everything older than 14 days can be deleted
Could you tell me how to do this via a bash shell script on Debian?
I couldn't find anything about this via Google.
Thank You.
|
Debian Shell Bash Script - Delete old backups/directorys [closed]
|
Break this into several distinct steps that you can implement and thoroughly test separately:
Build a list of files to be archived and then deleted, saved to a temp file
Use the list from step 1 to add the files to .tar.gz archives. Give the archive file a name following a specific pattern that won't appear in the files to be archived, and put it in a directory outside the hierarchy of files being archived.
Read back the files from the .tar.gz and compare them (or their hashes) to the original files to ENSURE that you got them all without corruption
Use the list from step 1 to delete the files. Do not use a wildcard for deletion. Put in some guard code to prevent deletion of any file matching the name pattern of the archive .tar.gz file(s) created in step 2.
When testing a script that can do irreversible damage, always code the dangerous command with a leading echo and leave it that way until you are sure everything works. Only then remove the echo.
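A rough sketch of steps 1, 2 and 4 under those rules (the paths are examples, and the leading echo stays on the delete until the archive has been verified):
# 1. list files older than ~6 months, null-separated so odd names survive
find /var/www -type f -mtime +180 -print0 > /tmp/archive-list.0
# 2. archive exactly that list
tar -czf /root/old-files-$(date +%F).tar.gz --null -T /tmp/archive-list.0
# 4. delete only what was listed; drop the echo only after step 3 checks out
xargs -0 -a /tmp/archive-list.0 echo rm --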
|
I'm trying to put together a command that will find files that haven't been modified in over 6 months and zip them in one command. Afterwards I want to delete all those files I just archived.
My current command to find the directories with the files is
find /var/www -type d -mtime -400 ! -mtime -180 | xargs ls -l > testd.txt
This gave me all the directories including the files that are older than 6 months
Now I was wondering if there is a way of zipping all the results and deleting them afterwards. Something along the lines of:
find /var/www -type f -mtime -400 ! -mtime -180 | gzip -c archive.gz
If anyone knows the proper syntax to achieve this I'd love to know. Thanks!
Edit, after a few tests this command results in a corrupted file
find /var/www -mtime -900 ! -mtime -180 | xargs tar -cf test4.tar
Any ideas?
|
Zipping and deleting files with certain age
|
The newer segments are probably old segments that have been "recycled" in preparation for future use, but not yet used (and so not needed for recovery)
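One way to see which segments are genuinely still waiting to be archived is to look at the .ready marker files rather than at the segment names themselves, for example:
ls pg_xlog/archive_status/*.ready
Segments with a .done marker (or no marker at all, in the case of recycled future segments) do not need to be saved for recovery.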
|
I've noticed a lag of archive_command execution. The command is configured to set a flag on arhived segment:
archive_command = 'rm pg_xlog/*.backuped ; touch %p.backuped'
If I run ls then I see that a lot of segments are not archived:
000000010000098800000029
00000001000009880000002A
00000001000009880000002B
00000001000009880000002C
00000001000009880000002D
00000001000009880000002E
00000001000009880000002F
000000010000098800000030
000000010000098800000031
000000010000098800000032
000000010000098800000032.backuped
000000010000098800000033
000000010000098800000034
000000010000098800000035
000000010000098800000036
000000010000098800000037
000000010000098800000038
000000010000098800000039
00000001000009880000003A
00000001000009880000003B
00000001000009880000003C
00000001000009880000003D
00000001000009880000003E
00000001000009880000003F
000000010000098800000040
000000010000098800000041
000000010000098800000042
000000010000098800000043
Is this correct behaviour? How do I save those last segments to not lose them on server crash?
|
Postgres wal archivation delay
|
You can do it using three filters:
rsync -av --filter="+ /home" \
--filter="+ /home/*" \
--filter="+ /home/*/public_html" \
--filter="+ /home/*/public_html/**" \
--filter="- *" / [email protected]:mirror
It is important to add a "+" filter for all the directories in the tree above public_html, and the "**" to include everything below public_html.
The only drawback is that all the home directories will be created on the destination, but only as empty dirs.
|
Alright so my web server has the following file structure
/
/home
/home/username
/home/username/public_html
/home/username/mail
/home/username/etc
...
/home/username2
/home/username2/public_html
...
So I'm trying to figure out a way of doing a cron job that does an rsync which will only synchronise the public_html folder of the 600 accounts I have. I thought of maybe building an exclusion list with every other subfolder name there is under the account directories, but I wasn't sure that was the optimal solution.
Is there a way of telling rsync to only sync the contents of the public_html folders without having to manually type in the 600 accounts?
Thanks
PS.: My current solution was something along the lines of:
rsync -vaRu --exclude 'mail*' --exclude 'etc*' home [email protected]:home
With this solution, if any filenames match the directory names they won't be copied over.
|
Rsync syntax to copy specific subfolders
|
1
Backup paths are always relative to the server. You can backup to UNC (which I do personally, even if it's a local UNC) or, if you're on a sufficiently recent build of SQL 2012, Azure blob storage (http://technet.microsoft.com/en-us/library/jj919148.aspx).
answered Sep 29, 2013 at 14:21 by Ben Thul
|
|
I followed the examples from http://msdn.microsoft.com/en-us/magazine/cc163409.aspx
I am creating a utility that creates backups of databases (local or remote).
I was able to create backups of databases located on my local server. But when I do the same for databases located on the hosting server I get the following error:
System.Data.SqlClient.SqlError: Cannot open backup device 'D:\Brij\Docs\MyDb.bak'. Operating system error 21(The device is not ready.).
It looks like SMO creates the backup file where the server is located, and hence it is not finding the path. Am I correct? How can I take a backup of a database on a hosting server and get the backup file onto my local machine?
|
Can we use SMO to take backup of database located at hosting server?
|
In eclipse:
DDMS perspective -> select emulator in devices tab -> File Explorer on the right hand side
Download to PC and revert the process on the other emulator.
Pushing via adb:
It seems that the Eclipse tools do not support putting whole folders onto the device, but from the adb command line you can pull the folder to the PC and push it back to the other emulator (run adb root first on the emulator if needed):
adb pull /data/data/com.my.foo ./com.my.foo
adb push ./com.my.foo /data/data/com.my.foo
|
Is there a way to back up an Android app folder from an emulator to a computer, then restore it to another emulator?
For example, Android emulator A has an app called Foo, therefore it has the following folder with many sub-folders and numerous files:
/data/data/com.my.foo
I would like to back up this folder, then restore it to emulator B.
Is this doable?
|
How can an Android app folder be backed up and restored on an emulator
|
1
Most likely your WAL takes a long time to fill up. You can adjust the timeout to force it to switch before it's full. This will increase network traffic significantly, but will give you a max time before the log is sent over. You can check the documentation here.
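The parameter in question is archive_timeout. For example, to force a segment switch at least every five minutes you would put something like this in postgresql.conf on the master (300 seconds is only an illustration; pick a value that balances acceptable lag against WAL volume):
archive_timeout = 300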
answered Sep 26, 2013 at 14:16 by Andres Olarte
|
|
We have recently implemented high availability for our postgres (9.0.4) DB server, through the methods described as Log-Shipping Standby Servers in the Postgres documentation. Everything seems to be fine and working, the WAL files are shipping and are being ingested by the standby server, but we are experience lagging between the master and slave machines. The lag is of about 2 hours which is not really acceptable.
What could be the reason for this lag? The machine is not running anything else but the postgres server, although it does use slower hard drives compared to the production server. How can I check if disk I/O is causing issues?
If I check what processes are running on the server I see a constant battle between the postgres startup process which is recovering newest WAL files and the pg_standby utility which is ingesting the archived WALs step-by-step. Is it OK that the startup process is running constantly?
ps example:
postgres 1422 0.0 1.0 13061220 131568 ? S Sep20 0:01 /usr/pgsql-9.0/bin/postmaster -p 5433 -D /data/pgsql_5433/data
postgres 1431 0.0 0.0 176928 512 ? Ss Sep20 0:12 postgres: logger process
postgres 1432 70.5 72.0 13068604 8775544 ? Ss Sep20 5744:15 postgres: startup process waiting for 000000010000181F00000016
postgres 1437 0.2 70.4 13068336 8582736 ? Ss Sep20 22:50 postgres: writer process
postgres 32199 0.0 0.0 4064 484 ? S 01:46 0:00 /usr/pgsql-9.0/bin/pg_standby -l -t/data/pgsql_5433/trigger /data/pgsql_5433/psql_wal_import 000000010000181F00000016 pg_xlog/RECOVERYXLOG 000000010000181E00000051
I would appreciate any hint ...
|
Postgres HA - Warm standby server lagging
|
1
An export problem with large tables has been fixed in phpMyAdmin 4.0.6.
answered Sep 13, 2013 at 15:34 by Marc Delisle
More than 5 years later, under phpMyAdmin 4.8.3 and WHM/cPanel 76.0 the same exact issue made a come back :P
– that-ben
Feb 19, 2019 at 0:27
1
Maybe related to github.com/phpmyadmin/phpmyadmin/issues/14478 whose fix will appear in version 4.8.6.
– Marc Delisle
Feb 20, 2019 at 1:20
Hope so! We'll see... Thanks :) I +1'ed you for the necromancy trouble ;-)
– that-ben
Feb 21, 2019 at 13:58
|
|
This is the first time I've had this problem after backing up for over five years. After setting up a custom export and hitting 'Go', the message is:
the webpage not found
I proceed to 'more' and get this message:
No webpage was found for the web address: http://name.com/cpsess3961873665/3rdparty/phpMyAdmin/export.php
Error code: ERR_FILE_NOT_FOUND
I tried two databases in my database list that behave normally as I am able to add info to my blog. I am backing up a Wordpress blog.
My last backup is 27 July 2013.
|
Attempting database export yields webpage not found error
|
OK, the following is the best solution:
long date = System.currentTimeMillis(); // keep the full millisecond value; casting to int would truncate it
String source = "/system/etc/gps.conf";
String destination = "/system/etc/gps" + date + ".conf";
if(RootTools.remount("/system/etc/", "rw")){
RootTools.copyFile(source, destination, true, true);
}
The problem is that previously I pointed to /etc, but that location is a symlink; the real path is /system/etc. Obviously we can't change the mount type of a symlink, so the code I just posted above is the right answer.
Thanks.
|
I'm trying to make a copy of a file in the etc folder. For that I use the following code in a button:
changeNTP.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
File exists = new File("/etc/gps.conf");
if (exists.exists()) {
// We make a backup first
CommandCapture command = new CommandCapture(0, "cp -f /etc/gps.conf /etc/gps" + System.currentTimeMillis() + ".conf");
try {
RootTools.getShell(true).add(command);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (TimeoutException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (RootDeniedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// Last time that file was modified
//Date filedate = new Date(exists.lastModified());
}
}
});
Well, the problem is that it doesn't copy anything. What could be the problem?
Thanks.
|
Why RootTools cp command don't work?
|
Make sure your account (Windows or SQL Server) in SSMS has the right to backup/restore: sysadmin, db_backupoperator, etc.
The backup and restore processes runs under the SQL Server (Engine) Service account since you might be running SSMS on your laptop but working with files on the server.
It doesn't matter who you are logged in as, it is the service account that needs access to the directory and files.
Is the service account a domain account or a local service? I use a domain account so that I can work with files on a UNC path.
Also, there are two system stored procedures that get executed during the browse dialog: master.dbo.xp_dirtree, master.dbo.xp_fileexist.
If they return empty results from a query window, it is a permission issue with the SQL Server Service account.
Profiler Trace Browse operation (Adventure Works).
declare @Path nvarchar(255)
declare @Name nvarchar(255)
select @Path = N'C:\mssql\save me\backup\AdventureWorks2012'
select @Name = N'AdventureWorks2012_backup_2012_11_30_160723_2147507.bak'
create table #filetmpfin (Name nvarchar(255) NOT NULL, IsFile bit NULL)
if(@Name is null)
begin
create table #filetmp (Name nvarchar(255) NOT NULL, depth int NOT NULL, IsFile bit NULL )
insert #filetmp EXECUTE master.dbo.xp_dirtree @Path, 1, 1
insert #filetmpfin select Name, IsFile from #filetmp f
drop table #filetmp
end
if(NOT @Name is null)
begin
declare @FullName nvarchar(300)
if(@Path is null)
select @FullName = @Name
else
select @FullName = @Path + '\' + @Name
create table #filetmp2 ( Exist bit NOT NULL, IsDir bit NOT NULL, DirExist bit NULL )
insert #filetmp2 EXECUTE master.dbo.xp_fileexist @FullName
insert #filetmpfin select @Name, 1-IsDir from #filetmp2 where Exist = 1 or IsDir = 1
drop table #filetmp2
end
SELECT
Name AS [Name],
IsFile AS [IsFile]
FROM
#filetmpfin
ORDER BY
[IsFile] ASC,[Name] ASC
drop table #filetmpfin
|
I am trying to restore a database from .bak and trn files. I am not able to see .bak and .trn files through the SQL Server Management Studio. But when I go to the folder I see them. I used T-sql but it says access is denied. I am a sysadmin on the server. Can someone please help me with it.
Script:
RESTORE DATABASE [XYZ]
FROM DISK = N'R:\MSSQL10_50.MSSQLSERVER\MSSQL\Restore\XYZ_Full.bak' WITH FILE = 1
GO
Error: Msg 3201, Level 16, State 2, Line 3
Cannot open backup device 'R:..."Operating system error 5(Access is denied.).
|
SQL Server Management Studio - trouble restoring from bak and trn
|
1
If your original query works like you said it does (I don't know off the top of my head if it works that way or not) you just need to add the additional devices to the Backup object
This should be the equivalent SMO code
var server = new Server(/*...*/);
var backup = new Backup();
backup.Action = BackupActionType.Database;
backup.Database = "AdventureWorks";
backup.Devices.AddDevice(@"C:\Backup\MultiFile\AdventureWorks1.bak",DeviceType.File);
backup.Devices.AddDevice(@"C:\Backup\MultiFile\AdventureWorks2.bak",DeviceType.File);
backup.Devices.AddDevice(@"C:\Backup\MultiFile\AdventureWorks3.bak",DeviceType.File);
backup.SqlBackup(server);
answered Aug 21, 2013 at 7:05 by Scott Chamberlain
Thanks Scott For your quick response. I haven't tried this one. But I think this will work. But Sorry I forgot to write another issue.Let me tell you other case. I want to split my backup files if size of my database exceeds specific limit of 10 GB. So My issue is How Can I get actual size of my backup files? Sorry for inconvenience.
– Ankit Prajapati
Aug 21, 2013 at 7:15
That I do not know, I never have needed to split backup files before. I only knew how to do the direct translation of what you posted in SQL to C# syntax using SMO.
– Scott Chamberlain
Aug 21, 2013 at 7:16
Thanks Scott for your kind consideration. So is there no any other way to get actual size of my backup files before I code for Backup..?
– Ankit Prajapati
Aug 21, 2013 at 7:28
As I said; there may, or may not, be a easy way to do that. I just don't know enough about splitting backups to be able to answer.
– Scott Chamberlain
Aug 21, 2013 at 7:37
|
|
I am working on backup and restoration of a SQL database. I am new to this. I have an issue related to the backup process. I have a SQL database and I am using the BACKUP and RESTORE classes to perform backup and restore of the database.
I want to make my program more efficient using split backups. I have approximately 15 to 20 GB of data, so I want to split my backup files at a specified limit, let's say 8-10 GB. I can do this using the SQL statements below:
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backup\MultiFile\AdventureWorks1.bak',
DISK = 'C:\Backup\MultiFile\AdventureWorks2.bak',
DISK = 'C:\Backup\MultiFile\AdventureWorks3.bak'
GO
But I want to do this by using the Microsoft.SqlServer.Management.Smo.Backup classes. And I want to get the size of my backup files, because my criterion is to split the file only if my database size exceeds 10 GB. So my issue is: how can I get the size of my database when taking the backup?
|
How do I split large SQL database backup files in .net?
|
Yes, there was an option to take a backup of the Research In Motion registration information after submitting the PBDT and RDK files, when choosing the first option in the attached image.
I forgot to do that.
Taking this backup is very important if you want to register your BlackBerry devices again with the same signing keys.
Also refer this link http://supportforums.blackberry.com/t5/Testing-and-Deployment/Backup-and-Restore-BlackBerry-Code-Signing-Keys/ta-p/837925
|
I have already got the PBDT.csj and RDK.csj files from the code signing process via this (https://www.blackberry.com/SignedKeys/codesigning.html) link.
But when I try to sign registration and configuration I need to do it with the second option, which is already marked in the attached image, and that requires having the Research In Motion registration information (.zip) file backed up.
So can anyone please tell me how I should take a backup of the Research In Motion registration information (.zip) during the BlackBerry code signing process?
Please refer attached image for more details.
Thanks in advance.
|
How should I do back up of research in motion registration information (.zip) during code signing process of Blackberry?
|
Just remove /home/jjd from the exclude file. According to the rsync documentation, a leading slash does not apply to the root of the filesystem, but to the "root of the transfer".
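In other words, with /home/jjd/ as the source, the exclude file should use paths relative to that transfer root (or bare patterns). A corrected version of the list in the question would start like this:
/.thumbnails/
/Downloads/.org.chromium.Chromium*
/.cpan
.cache/
*.swp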
|
I use rsync to backup the home directory (ext4) of my Ubuntu installation. I use the following command to copy files and folders to a remote server (ext4).
$ rsync -rt --delete --delete-excluded --links \
--exclude-from '/home/jjd/rsync-home-exclude.txt' \
/home/jjd/ server:/volume1/backup-home
I defined some folders and files which can be ignored for the backup:
$ cat /home/jjd/rsync-home-exclude.txt
/home/jjd/.thumbnails/
/home/jjd/Downloads/.org.chromium.Chromium*
/home/jjd/.cpan
.cache/
*.swp
*.lock
*.tmp
/home/jjd/.local/share/recently-used.*
.TrueCrypt/.show-request-queue
.dropbox/command_socket
.dropbox/iface_socket
*.sock%
Nevertheless, rsync still reports the following errors:
rsync: opendir "/home/jjd/.cpan/build/local-lib-1.008009-Xl6GGK/inc" failed: Permission denied (13)
rsync: opendir "/home/jjd/.cpan/build/local-lib-1.008009-Xl6GGK/lib" failed: Permission denied (13)
rsync: opendir "/home/jjd/.cpan/build/local-lib-1.008009-Xl6GGK/t" failed: Permission denied (13)
IO error encountered -- skipping file deletion
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
|
How to exclude .cpan folder from rsync?
|
Change CONSISTENT=Y to FLASHBACK_TIME=SYSTIMESTAMP.
Remove DIRECT=Y (you can think of expdp as always using direct path, whenever possible).
Change the FILE= parameter to the DUMPFILE= parameter.
That way you won't be using legacy mode. See if this resolves the ORA-00922 issue.
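Putting that together, the native (non-legacy) invocation would look roughly like this, with the credentials elided as in the question:
expdp system/ELIDED JOB_NAME=exp_BTM2CATS SCHEMAS=BTM2CATS DUMPFILE=btm2cats-%u.dmp DIRECTORY=DP_DIR FILESIZE=1900M FLASHBACK_TIME=SYSTIMESTAMP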
|
I am running an export from Oracle 11g:
$ expdp system/ELIDED JOB_NAME=exp_BTM2CATS SCHEMAS=BTM2CATS file=btm2cats-%u.dmp DIRECTORY=DP_DIR filesize=1900M CONSISTENT=Y DIRECT=Y
Export: Release 11.2.0.1.0 - Production on Wed Jul 31 22:44:29 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with: "flashback_time=TO_TIMESTAMP('2013-07-31 22:44:29', 'YYYY-MM-DD HH24:MI:SS')"
Legacy Mode Parameter: "direct=TRUE" Location: Command Line, ignored.
Legacy Mode Parameter: "file=btm2cats-110.dmp" Location: Command Line, Replaced with: "dumpfile=btm2cats-2.dmp"
Legacy Mode has set reuse_dumpfiles=true parameter.
... and getting an error:
...
ORA-31693: Table data object "BTM2CATS"."APM_PACKAGE_VERSIONS" failed to load/unload and is being skipped due to error:
ORA-00922: missing or invalid option
...
All other ORA-00922 errors I see references to are when invoking "CREATE TABLE" or perhaps a related "ALTER". This error does not seem to be appropriate for occurring in the middle of a properly-invoked expdp invocation. Can anyone explain what this error means in this context and what I might do to try and fix it?
|
Why do I get "ORA-00922: missing or invalid option" while exporting data from an Oracle schema?
|
ECHO "MOVE "\\%pcid%\C$\Program Files\Application\Data\*.xml" > "\\server32\c$\scripts\masterbackup.bat
---- ^ Remove this quote and add an extra waaaaay up at the very end...................................^.here
The syntax is ECHO string > file
Where quotes should be balanced and need to be placed around any (full-)filename that contains spaces (etc.)
Note also that > will write the data to a NEW file, deleting the exiting (if any). Use >> to APPEND to an existing file.
Having said that, all the command would do is put or add a line
MOVE "\\%pcid%\C$\Program Files\Application\Data\*.xml"
to the file "\\server32\c$\scripts\masterbackup.bat"
That doesn't seem to be particularly rational. Shouldn't you be MOVEing the fileset to somewhere and appending that move command to the batch?
|
I am trying to write a batch file that will write a new line with a "MOVE" command to a second batch file. We have a master batch file with a MOVE command for every PC that uses a piece of our software so we can back the records up to a network drive (scheduled to run daily). Data on the local PC's gets deleted after 20 days and we need to create a place to hold these files permanently. Unfortunately this is the best way to keep our data backed up, I'm just trying to automate the process to make the process as easy as I can for my department. I'm trying the command below but I think it's an issue with the quotation marks. Any help would be appreciated, thanks!
:START
ECHO.
SET /p pcid=Please enter the PCID that you would like to setup for Auto-Archiving:
IF "%pcid%"=="%%" (GOTO CONFIRMPC)
IF "%pcid%"=="exit" (GOTO END)
:CONFIRMPC
ECHO.
ECHO Please verify that "%pcid%" is correct...
ECHO.
SET /p verify=Enter y/n...
IF "%verify%"=="y" (GOTO SETUPAUTOARC)
IF "%verify%"=="n" (GOTO START)
IF "%verify%"=="%%" (GOTO VERIFYERROR)
IF "%verify%"=="exit" (GOTO END)
:VERIFYERROR
ECHO.
ECHO Please enter a valid (y/n) response...
(GOTO CONFIRMPC)
:SETUPAUTOARC
ECHO.
ECHO Creating directory...
MKDIR "\\server32\e$\Backup Data\%pcid%"
ECHO.
(HERE IS WHERE I'M RUNNING INTO TROUBLE)
ECHO "MOVE "\\%pcid%\C$\Program Files\Application\Data\*.xml" > "\\server32\c$\scripts\masterbackup.bat
ECHO.
SET /p endresp=Finished! Would you like to run another PCID? (y/n)
IF "%endresp%"=="y" (GOTO START)
IF "%endresp%"=="n" (GOTO END)
:END
exit
|
Append Command From One Batch to another
|
In the end, after much looking into the issue and leaving it, letting it ripen in my mind, I was able to find exactly what I needed by using mysqldump:
exec('mysqldump --user=XXX --password=XXX --host=localhost DBNAME > outputfile.sql');
Hope it might help someone else out there!
P.S. Thanks a lot to @hellosheikh for the answer. It did provide very interesting info that might prove handy later on. However, for my exact needs, this second way in the end proved more practical.
|
I'm using Wamp to, among other things, run a couple of local PHP sites that use local mySQL dbs in order to organize and keep personal info. None of this touches anything outside my PC. It just works on my local virtual server. However, if something happens to my PC, then everything's gone. If this were online, in an external server, then there's auto backup, but as it is private info, security issues make this impossible, besides impractical.
What I would like is to be able to export my SQL db upon hitting the SAVE button on new info I'm inserting into the db. I would like to save the db into, for example, my Dropbox folder. This way, if anything happens to my PC, I have my dbs secure in my Dropbox.
I've found how to back up dbs onto the same server, and obviously I know how to do it manually through phpMyAdmin, but I can't find how to back up onto a computer, specifically the same computer where the local server with the dbs is running.
Can anyone please help?
|
Backing up local SQL database onto the same computer using PHP
|
1
Known problem between Windows 2008 and non-2008 volumes. Hot fix resolves it.
answered Jun 28, 2013 at 0:51 by Jason F
|
|
We have a backup application that uses the Windows API BackupRead. It works correctly on Windows Server 2003, 2008, 2008 R2. It does not work on Storage Server 2008 R2. It always fails with error 50 - The request is not supported. The documentation for BackupRead gives no indication that it will not work with Storage Server 2008 R2.
Anyone else have any experience using this API on Storage Server 2008 R2? Did you need to make any changes to your use of the API in order for it to work?
|
Windows API BackupRead failing with error 50 on Windows Storage Server 2008 R2
|
1
Just use the command copy must easy.
take a look:
for /F %%a in (computerslist.txt) do (
copy \\%%a\c$\users\administrator\desktop\%%a\*.txt c:\mycollecteddata\%%a
)
that will copy all files *.txt for all computers that are on computereslist.txt; the copy will be with the current credentials. Save the code on a file *.cmd and execute with the right user, you can create a scheduled taks to start with a user thant is commom for all computers.
Good work.
Share
Improve this answer
Follow
answered Jun 25, 2013 at 22:53
MineScriptMineScript
30111 silver badge44 bronze badges
1
That sounds like a solid idea. I am going to try your suggestion.
– user2521943
Jun 25, 2013 at 23:14
Add a comment
|
|
I am trying to gather files/folders from multiple computers in my network into one centralized folder in the command console (this is the name of the pseudo server for this set of computers)
Basically, what i need is to collect a certain file from all the computers connected to my network and back it up in the console.
Example:
* data.txt // this is the file that i need to back up and its located in all the computers in the same location
* \console\users\administrator\desktop\backup\%computername% // i need each computer to create a folder with its computer name into the command console's desktop so i can keep track of which files belongs to which computer
I was trying to use psexec to do this using the following code:
psexec @cart.txt -u administrator -p <password> cmd /c (^net use \\console /USER:administrator <password> ^& mkdir \\console\users\Administrator\Desktop\backup\%computername% ^& copy c:\data.txt \\console\USERS\Administrator\DESKTOP\backup\%computername%\)
any other suggestions since im having trouble with this command
|
Gathering Files from multiple computers into one
|
robocopy \path\to\source \path\to\dest /XO /E /Y
or something like
fastcopy.exe /cmd=diff /speed=full /force_start /no_confirm_del /auto_close "\path\to\source" /to="\path\to\dest"
|
I am looking for a batch script that can detect last modified/accessed/created files in a day
and can copy them to a specified location on external drive.
Also if it can automatically execute just before shutdown
that will also help a lot! Thank You !!
|
Backup Computer Using Batch Script to external HDD
|
If MySQL can start : follow this post to connect to mysql in console mode :
http://ja.meswilson.com/blog/2007/04/07/access-mysql-command-line-in-xampp/
And then backup it
Edit : Other link : How can I access the MySQL command line with XAMPP for Windows?
|
Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
Improve this question
i can't startup my windows (maybe virus) and i can't start xampp on it
i can just startup using safe mode
how can i start xampp in safe mode?
i need my databases backup
how can i backup my databases in safe mode?
i search in google but i couldn't find any thing
when i start xampp in safe mode i see this error
ERROR: Apache Service not started [-1]
|
how to start xampp in windows safe mode [closed]
|
1
One method is simply to save your journal receivers. You can change receivers first, then save the detached recievers.
Share
Improve this answer
Follow
answered Jun 21, 2013 at 19:28
WarrenTWarrenT
4,5122020 silver badges2727 bronze badges
Add a comment
|
|
Is there a way to do incremental backup on DB2 for i? I want to do something like this http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/c0006069.htm
|
Incremental backup db2 for i
|
I solve that problem .i found bug for NSUserdefault this NSUserdefault stored data store in preference in plist file so that NSUserdefault data remove and solve problem.
|
i use this method for do not backup and that output is always success . but backup data also come in backup in ipad please help me.
-(BOOL)addSkipBackupAttributeToItemAtURL:(NSURL *)URL
{
const char* filePath = [[URL path] fileSystemRepresentation];
const char* attrName = "com.apple.MobileBackup";
if (&NSURLIsExcludedFromBackupKey == nil) {
// iOS 5.0.1 and lower
u_int8_t attrValue = 1;
int result = setxattr(filePath, attrName, &attrValue, sizeof(attrValue), 0, 0);
return result == 0;
}
else
{
// First try and remove the extended attribute if it is present
int result = getxattr(filePath, attrName, NULL, sizeof(u_int8_t), 0, 0);
if (result != -1) {
// The attribute exists, we need to remove it
int removeResult = removexattr(filePath, attrName, 0);
if (removeResult == 0) {
NSLog(@"Removed extended attribute on file %@", URL);
}
}
// Set the new key
NSError *error = nil;
[URL setResourceValue:[NSNumber numberWithBool:YES] forKey:NSURLIsExcludedFromBackupKey error:&error];
return error == nil;
}
}
above method i use .please help me anybody.thanks
|
'skip-backup' attribute always returns backup instead of skip process
|
Instead of checking modification , you can use a simple trick as described by trojanfoe.
When ever you are modifying i mean adding/ removing/ editing any record in database , set a BOOL flag = YES and store in NSUSERDEFAULTS , after creating the backup, set the flag to NO.
|
Here am creating backup of my database. But i don want to jus keep creating backups. i need to check the NSModificationDate property of the most recently created backup database and have to create new backup only if the database is modified. Can anyone help me on this.
-(IBAction)createdb:(id)sender
{
DatabaseList = [[NSMutableArray alloc]init];
NSDate *currentDateTime = [NSDate date];
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
[dateFormatter setDateFormat:@"MMddyyyyHHmmss"];
NSString *dateInStringFormated = [dateFormatter stringFromDate:currentDateTime];
dbNameString = [NSString stringWithFormat:@"UW_%@.db",dateInStringFormated];
NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentFolderPath = [searchPaths objectAtIndex: 0];
NSString *dbName = @"UnitWiseDB.db";
NSString *dbPath1 = [documentFolderPath stringByAppendingPathComponent:dbNameString];
NSString *backupDbPath = [documentFolderPath stringByAppendingPathComponent:dbName];
NSError *error = nil;
NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:backupDbPath error:&error];
NSLog(@"Persistent store size: %@ bytes", [fileAttributes objectForKey:NSFileSize]);
NSLog(@"Modification Date: %@ ",[fileAttributes objectForKey:NSFileModificationDate]);
if ( ![[NSFileManager defaultManager] fileExistsAtPath:dbPath1])
{
[[NSFileManager defaultManager] copyItemAtPath:backupDbPath toPath:dbPath1 error:nil];
}
NSLog(@"DBPath.......%@",dbPath1);
NSFileManager *manager = [NSFileManager defaultManager];
NSArray *fileList = [manager contentsOfDirectoryAtPath:documentFolderPath error:nil];
for (NSString *s in fileList)
{
NSLog(@"Backup.....%@", s);
[DatabaseList addObject:s];
}
[ListViewTableView reloadData];
}
|
Compare NSFileModificationDate of previous Backup
|
Use this method. Create a new copy of your database and save with a different name.
- (void)copyDatabaseToCache
{
NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
NSString *documentFolderPath = [searchPaths objectAtIndex: 0];
NSString *dbPath1 = [documentFolderPath stringByAppendingPathComponent:@"newDatabaseName.sqlite"];
NSString *backupDbPath = @"You should give back up db path here";
NSError *error = nil;
NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:backupDbPath error:&error];
NSLog(@"Persistent store size: %@ bytes", [fileAttributes objectForKey:NSFileSize]);
if ( ![[NSFileManager defaultManager] fileExistsAtPath:dbPath1]) {
[[NSFileManager defaultManager] copyItemAtPath:backupDbPath toPath:dbPath1 error:nil];
}
}
|
i need to backup my database. Initially in my "Create Backup" page, i have my original database shown. When i click the add new backup button, a new backup of my database has to be created on checking a condition that whether any new changes have been made. If any changes have been made, new backup has to created. Otherwise just an alert msg can be shown that no changes from the last backup file. Can anyone help on this
|
Doing Backup of database
|
1
The first time you reach this line;
$return.= 'DROP TABLE '.$table.';';
...$return has no value to append to. To get rid of the warning, you'll need to initialize $return (to an empty string probably) before starting the loop.
Share
Improve this answer
Follow
answered Jun 9, 2013 at 9:06
Joachim IsakssonJoachim Isaksson
179k2626 gold badges288288 silver badges300300 bronze badges
2
when I set $return='' before that line, the database doesn't completely backup. it's just have one table in the .sql file.
– user2467703
Jun 9, 2013 at 9:11
@user2467703 That's why I said before starting the loop (aka before the foreach line) :) If you put it inside the foreach, it will clear it out every iteration instead of just once at the start of the program as intended.
– Joachim Isaksson
Jun 9, 2013 at 9:14
Add a comment
|
|
I tried this following code to backup mysql database into file and it works perfectly create the .sql file I need.
function backup_tables()
{
$host='localhost';
$user='root';
$pass='';
$name='evote';
$tables = '*';
$link = mysql_connect($host,$user,$pass);
mysql_select_db($name,$link);
//get all of the tables
if($tables == '*')
{
$tables = array();
$result = mysql_query('SHOW TABLES');
while($row = mysql_fetch_row($result))
{
$tables[] = $row[0];
}
}
else
{
$tables = is_array($tables) ? $tables : explode(',',$tables);
}
//cycle through
foreach($tables as $table)
{
$result = mysql_query('SELECT * FROM '.$table);
$num_fields = mysql_num_fields($result);
$return.= 'DROP TABLE '.$table.';';
$row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE '.$table));
$return.= "\n\n".$row2[1].";\n\n";
for ($i = 0; $i < $num_fields; $i++)
{
while($row = mysql_fetch_row($result))
{
$return.= 'INSERT INTO '.$table.' VALUES(';
for($j=0; $j<$num_fields; $j++)
{
$row[$j] = addslashes($row[$j]);
$row[$j] = str_replace("\n","\\n",$row[$j]);
if (isset($row[$j])) { $return.= '"'.$row[$j].'"' ; } else { $return.= '""'; }
if ($j<($num_fields-1)) { $return.= ','; }
}
$return.= ");\n";
}
}
$return.="\n\n\n";
}
//save file
$sql_name='db-backup.sql';
$handle = fopen($sql_name,'w+');
fwrite($handle,$return);
fclose($handle);
return $sql_name;
}
but it shows the error code :
A PHP Error was encountered
Severity: Notice
Message: Undefined variable: return
Filename: models/vote_m.php
can somebody tell me how to fix this?
|
Error backup sql database but succeed creating database dump file
|
In Windows Phone 8, backup and restore settings are controlled by the user through system settings. An app cannot prevent itself from being backed up. However, note that the backup does not store any data associated with third party apps but rather only stores a list of installed apps
So basically you don't need to do anything in your app to prevent local files from being stored on SkyDrive if the user has enabled backup.
In Windows 8 everything can be backed up since an admin user will have full access to his computer files, I don't think you can restrict this. If you have sensitive data you can use DataProtectionProvider to protect it.
|
Protecting user files with File History talks about File History, which is basically a continuous backup for Windows 8. The blog discusses File History in depth, and also discusses how to integrate SkyDrive.
I want to programmatically disable backup of certain files. The files live on another server, and there's no need to back them up locally or put them on someone else's cloud. The blog and related articles doe not talk about opt'ing out of the service for application data.
How does one programmatically: (1) disable local file backups; and (2) disable cloud based backups. I'm interested in settings for both Windows 8 (desktop or laptop) and Windows Phone 8.
Related: Both Android and Apple have similar. For Android, we add android:allowBackup and set it to false in AndroidManifest.xml. For Apple, we can use kCFURLIsExcludedFromBackupKey file property or com.apple.MobileBackup extended attribute.
Jeff
|
Windows: Avoid or Disable Backups on Files
|
try with below code
static void BackupDataBase(string databaseName, string destinationPath)
{
try
{
Server myServer = GetServer();
Backup backup = new Backup();
backup.Action = BackupActionType.Database;
backup.Database = databaseName;
destinationPath = System.IO.Path.Combine(destinationPath, databaseName + ".bak");
backup.Devices.Add(new BackupDeviceItem(destinationPath, DeviceType.File));
backup.Initialize = true;
backup.Checksum = true;
backup.ContinueAfterError = true;
backup.Incremental = false;
backup.LogTruncation = BackupTruncateLogType.Truncate;
backup.SqlBackup(myServer);
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
private static Server GetServer()
{
ServerConnection conn = new ServerConnection("server", "username", "pw");
Server myServer = new Server(conn);
return myServer;
}
refere this codeproject article for more information.
|
Using C# and SMO, when I create backups they are being copied to the default backup location used by SQL Server (C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\Backup), instead of the physical location that I specify in code:
Database database = Server.Databases[dbName]);
Backup backup = new Backup();
device = new BackupDevice();
device.Parent = Server;
device.Name = dbName + ".bak";
device.BackupDeviceType = BackupDeviceType.Disk;
device.PhysicalLocation = Path.Combine(filePath + device.Name); // doesn't appear to do anything
device.Create();
backup.Action = BackupActionType.Database;
backup.Database = database.Name;
backup.Devices.AddDevice(filePath, DeviceType.File);
backup.SqlBackup(server);
When I run my code, I find that the path that I specified ("C:\backupTest") is empty and the backup has been added to the default backup location.
Anyone know why this is?
|
BackupDevice.PhysicalLocation does not add to specified location
|
Here's a lot of information about making backups of a Derby database: http://db.apache.org/derby/docs/10.9/adminguide/cadminhubbkup98797.html
Choose a backup method that works well for you, then use your operating system's scheduling tools (cron, etc.) to arrange for that backup to be performed regularly.
|
Apache database hosted in a virtual server to be used with a JSF and JPA application.It there any method where regular back ups can be performed, for example once a day? Like an script?
|
Regular Backup Script for Apache Derby
|
1
The file names will differ each time ?
This would be hard for any type of syncing to work.
What you could do is :
create a new folder outside of where it is found, then :
Before you start remove the last sym linked file in that folder
When the file is found i.e. ls -tAF | grep '/$' | head -1 ....
symlink it this folder
then rsync,ssh,unison file across to new node.
If the symlink name is file-latest.zip then it will always be this
one file sent across.
But why do all that when you can just scp and you can take a look at here:
https://github.com/vahidhedayati/definedscp
for a more long winded approach, and not for this situation but it uses the real file date/time stamp then converts to seconds... It might be useful if you wish to do the stat in a different way
Using stat to work out file, work out latest file then simply scp it across, here is something to get you started:
One liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \;|awk '{print $1" "$13}'|sort -k2nr|head -n1|awk '{print $1}') remote_server:/path/to/name.zip
More long winded way, maybe of use to understand what above is doing:
#!/bin/bash
FOUND_ARRAY=()
cd parent_folder;
for file in $(find . -name \*.zip); do
ptime=$(stat -t $file|awk '{print $13}');
FOUND_ARRAY+=($file" "$ptime)
done
IFS=$'\n'
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1|awk '{print $1}');
scp $FOUND_FILE remote_host:/backup/new_name.zip
Share
Improve this answer
Follow
edited May 23, 2013 at 20:47
answered May 23, 2013 at 17:31
V HV H
8,48722 gold badges2828 silver badges4848 bronze badges
Add a comment
|
|
I'm trying to backup just one file that is generated by other application in dynamic named folders.
for example:
parent_folder/
back_01 -> file_blabla.zip (timestam 2013.05.12)
back_02 -> file_blabla01.zip (timestam 2013.05.14)
back_03 -> file_blabla02.zip (timestam 2013.05.22)
and I need to get the latest generated zip, just that one it doesnt matter the name of the file as long as is the latest, is a zip and is inside "parent_folder" get that one.
as well when I do the rsync the folder structure + file name is generated and I want to omit that I want to backup that file in a folder and with a name so I know where is the latest and it will be always named the same.
now im doing this with a perl that get the latest generated folder with
"ls -tAF | grep '/$' | head -1"
and perform the rsync but it does brings the last zip but with the folder structure that I dont want because it doesnt override my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
as well it would be great if I could do the rsync without needing to use perl or any other script.
thanks
|
rsync to backup one file generated in dynamic folders
|
You can easily back up your work using Git itself.
I might propose three ways to do that:
Periodically back up your repository to a pen drive:
Plug a flash drive to your PC;
git init --bare a repository on it;
Add it as a named remote in your main repository:
git remote add --mirror=push pendrive /media/that_drive_id
That --mirror=push command-line option will ensure that a simple call to git push pendrive will suffice to push everything pushable.
Back up your repository there using something like
git push pendrive
Unmount the drive.
The next time you will be about to back up, plug the drive then do
git push pendrive
Another option is to use the git-bundle command which might be used to export the whole repository (with history) to a single file which can then be copied off to an external storage.
This approach looks superficially simpler than the former but its simplicity comes at a cost:
No incremental backups: while git-bundle can be used to export only specific parts of the history, you must keep what was backed up the last time somewhere, and this is obviously inconvenient and error prone.
If the repository grows big, each "bundling" will create a big file.
Unless your stuff is really private (like passwords) buy private hosting plan from a Git provider and mirror your repository there.
Mirroring is set up exactly in the same way as for the pendrive approach, just the Git URL will obviously be different.
I personally like the pendrive approach best.
|
I'm using git to keep track of writing projects and other personal work on my own PC (running a version of Ubuntu).
Although my work is being version controlled, I am worried I might one day lose the whole folder (containing its .git file and the work itself) to mishap or technical failure.
What is the best way to protect or back up the work? (Aside from copying the whole folder to another drive.)
|
How to protect folder containing a git history
|
One option might be to use the builtin logical or (from find man page):
expr1 -o expr2
Or; expr2 is not evaluated if expr1 is true.
So in your case you can do:
find "$directory" -name '*.c' -o -name '*.sh'
|
I have this line here find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup that backs up everything ending in .sh. How would I do that for multiple file extensions? I've tried different combinations of quotes, parentheses and $. None of them worked =\
I would also like to back up certain file extensions into different folders and I'm not sure how to search a file name for a specific extension.
Here is my whole code just in case:
#!/bin/bash
collect()
{
find "$directory" -name "*.(sh|c)" -print0 | xargs -0 cp -t ~/bckup #xargs handles files names with spaces. Also gives error of "cp: will not overwrite just-created" even if file didn't exist previously
}
echo "Starting log"
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ] #if directory doesn't have a length of 0
then
echo -e "\nYou want to copy $directory." #-e is so the \n will work and it won't show up as part of the string
else
directory=/home/
echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
echo "Directory does not exist, creating now"
mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
|
Bash backup specified file extensions in multiple directories
|
I figured how to do it and correct me if there's a better way but here's how I did it:
I added a the --where= option and specified a condition where a column, say, user_id must be between 1 and 100 for example.
mysqldump --opt -h somedomain.com -u dbuser -p dbpass db1 table1 --where="user_id >= 1 AND user_id <= 100" > ./my-backup-file.sql
and that did exactly what I wanted to it to do.
In hindsight I think I could've used this condition instead: --where="user_id BETWEEN 1 AND 100", which, reads better.
|
So I have a table with nearly 200,000 records that I want to clean up. However I want to back it up. I've tried using the phpMyAdmin interface but the script keeps timing out given the huge size of the database. I've event tried backing up 5000 at a time and it doesn't work.
I'm wondering if I'll have a better time doing this from the command line using the mysqldump command. However, I'm having a hard time coming up with the command to:
back up a particular database (say db1)
back up a particular table in db1 (say table1)
back up a few records at a time (say 2500)
I know that the issue is not a connectivity issue because I can connect to it.
Here's what I have so far:
mysqldump --opt -h somedomain.com -u dbuser -p dbpass db1 table1 > ./my-backup-file.sql
Any help would be greatly appreciated.
UPDATE:
I know what the problem is. A sign-up form was left open for anyone to sign up and a spam bot found it and hammered it with sign-up requests thereby creating something like ~ 199,980 new records in the database. So I know they're standard varchar and text data being inserted. What i want to know is the easiest, pain, free way to clean it up.
|
Is it possible to backup records from a particular MySQL table a few at time with mysqldump? It's a table with a lot of records
|
1
No cancellation of backup process will not affect next backup.
Share
Improve this answer
Follow
answered May 4, 2013 at 6:29
raviOcsraviOcs
1122 bronze badges
Add a comment
|
|
I needed to run a backup of a very large database (which has a scheduled transaction log backup) but it was running too long and i killed the process (which took a long time to rollback) -- between then and my next backup, will the cancellation cause any data loss?
In case it doesn't seem clear, here's a rephrasing:
I killed a backup process - and I will start another one tomorrow - in any way, is the cancellation bad for the next backup?
|
Is it okay to cancel a T-SQL database backup script?
|
1
It's Impossible unless you change their filegroup and create a filegroup for them, read this Backup Overview (SQL Server) for more details.
Share
Improve this answer
Follow
answered Apr 21, 2013 at 11:48
Maryam ArshiMaryam Arshi
2,00422 gold badges2020 silver badges3434 bronze badges
Add a comment
|
|
I have a database that has a schema named lamb that has a number of database objects. I want to back up only the objects and data that are associated with the lamb schema and not the rest.
Is this possible ??? If so how ???
|
Backing up my schema and database objects using SQL Server 2008
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.