Response | Instruction | Prompt
---|---|---
It is most likely a spacing error or something equally small; the way I debug commands like this is to build the command string and print it out before running it. os.system is an easier alternative, although subprocess is better. I am not at my computer to test it, but you can either fix your subprocess call accordingly or use this example. This assumes you are on Linux or Mac.
import os
# build the command as one string so it can be printed and checked first
cmd = ('rsync -ave ssh --delete root@' + str(src_host) + ':' + str(src_directory) + ' ' + str(dst_dir))
print(cmd)      # how to test and make sure the spacing is right
os.system(cmd)  # actually performs the command
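If you stay with subprocess, note that the "bash: sudo rsync: command not found" error usually means the single quotes in --rsync-path='sudo rsync' reached the remote shell literally, so it looked for a command named "sudo rsync". subprocess does no shell quoting, so the quotes are not needed. A minimal sketch of that variant (host and paths are placeholders):
import subprocess

src_host = "root@backupserver"   # placeholder host
src_dir = "/tmp/test2"           # placeholder remote directory
dst_dir = "./test"               # placeholder local destination

# no shell is involved, so pass --rsync-path without extra quotes
subprocess.call([
    "rsync", "-ave", "ssh",
    "--rsync-path=sudo rsync",
    "--delete",
    src_host + ":" + src_dir,
    dst_dir,
])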
|
I am trying to set up a cron job to rsync remote files (which contain root-level files) to my local server. If I run the command in a shell it works, but if I run it from Python I get a strange "command not found" error:
This works if run it in a shell:
rsync -ave ssh --rsync-path='sudo rsync' --delete [email protected]:/tmp/test2 ./test
But this Python script doesn't:
#!/usr/bin/python
from subprocess import call
....
for src_dir in backup_list:
call(["rsync", "-ave", "ssh", "--rsync-path='sudo rsync'", "--delete", src_host+src_dir, dst_dir])
It fails with:
local server:$ backup.py
bash: sudo rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at io.c(226) [Receiver=3.1.0]
...
|
Python rsync error in reading remote root-level files
|
Open the SDK Manager; there you can see the path to the SDK. Go to that path and back up the entire folder. Make sure you don't miss anything, or else the SDK will show up as broken.
|
Last night my Windows 8 machine crashed while it had Android Studio open. Today Android Studio is acting funny, especially when it comes to debugging; the emulator and debugger do not seem to communicate well.
I'm thinking about reinstalling Android Studio, but I don't want to download the whole SDK packages again as they are large. So before I reinstall Android Studio I want to back up the currently installed SDKs so I can reinstall them from the backup.
Is there a way to back up the SDK, and how do I reinstall it?
I'm using Android Studio 1.3.2.
Thanks
|
How to backup SDK before android studio crashes completely?
|
Most of the time a CRC (cyclic redundancy check) error indicates file corruption of some kind; it can be a hard disk error (a file written on a bad sector of the disk), though it is possible for it to be something else as well.
What you can do is take the database offline for a moment (Tasks > Take Offline), then copy the database and log files (.mdf and .ldf) to an alternate location.
Then try to repair the database; if some data is lost, you can always restore the copied files and return the database to its previous state.
If your hard drive is damaged, there is a possibility that you will not be able to copy the files.
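A rough T-SQL sketch of the offline/copy/repair sequence described above (the database name is a placeholder, and REPAIR_ALLOW_DATA_LOSS should only be run after the .mdf/.ldf files have been copied):
-- take the database offline so the files can be copied (placeholder name)
ALTER DATABASE [YourDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...copy the .mdf/.ldf files at the OS level, then bring it back...
ALTER DATABASE [YourDb] SET ONLINE;

-- repair requires single-user mode; this option can discard damaged data
ALTER DATABASE [YourDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('YourDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [YourDb] SET MULTI_USER;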
|
I have an end user who came across this error and was asking for help. We are not able to take full backups; below is the error we are getting.
I tried running full backups set to continue after errors, but it still fails. Yes, the SQL Server service has full access to the disk. I was able to take full backups of other databases on the same server.
Msg 3203, Level 16, State 1, Line 1
Read on "R:\MSSQL10\Database.mdf" failed: 23(Data error (cyclic redundancy check).)
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
I ran CHECKDB and it came back saying to run repair_allow_data_loss. Is there any way to fix this error without allowing data loss?
|
SQL 2008 backups failing
|
You can use utilities like zktreeutil; it's quite old, but it saves the whole tree and you can restore it later.
Have a look here:
http://manpages.ubuntu.com/manpages/precise/man1/zktreeutil.1.html
|
Is there a stable solution for backing up and restoring a ZooKeeper cluster manually, without Exhibitor?
There is no documentation or sample for this anywhere.
|
How to back up and restore a ZooKeeper ensemble without Exhibitor?
|
If you can sustain some data loss and some downtime, then yes, log shipping is a good solution. In layman's terms, it's basically a full database backup plus scheduled log backups shipped to, and applied on, a remote server.
Now, if there are dependencies between the three databases, then you will most likely be out of sync if you ever have to fail over to the DR server.
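As a hedged sketch, the manual "poor man's log shipping" cycle looks roughly like this in T-SQL (the database name and paths are placeholders; the restores run on the standby server and must stay in NORECOVERY until you actually fail over):
-- on the primary: one full backup, then periodic log backups
BACKUP DATABASE [AppDb] TO DISK = N'\\standby\share\AppDb_full.bak' WITH INIT;
BACKUP LOG [AppDb] TO DISK = N'\\standby\share\AppDb_201501010100.trn';

-- on the standby: restore the full backup once, then each log backup in order
RESTORE DATABASE [AppDb] FROM DISK = N'\\standby\share\AppDb_full.bak' WITH NORECOVERY;
RESTORE LOG [AppDb] FROM DISK = N'\\standby\share\AppDb_201501010100.trn' WITH NORECOVERY;

-- only when failing over: bring the standby copy online
RESTORE DATABASE [AppDb] WITH RECOVERY;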
|
I am trying to set up some sort of fast recovery solution for a not-for-profit volunteer project.
They have two offices across the street from each other, and the one where their servers are located is suffering power outages; a recent one was three days long and triggered the quest for some sort of solution where we could have one server in each building, so that when an extended outage happens the second server could replace the primary one.
They are running SQL Server 2008 R2 Express, which I know doesn't include mirroring or publishing features, so I am looking for some way to come close to that.
We can afford some downtime, some manual intervention and even some data loss (meaning having to re-enter the last hour of data or so).
I've spent several hours researching, and it seems some hacky log shipping is the closest thing I can get, even though it would be an unsupported method.
But many details remain unclear to me:
Using Log shipping strikes me as something similar to rebuilding an Exchange server from logs... Is that so?
Wouldn't it be possible to dump a full backup over night and then have incremental backups that could be used to "rebuild" the databases to a "recent" state in the backup server?
Would this be so unreliable that you would consider it not worth doing?
I am not asking for the specifics here (not yet at least) but for a pointer of whether or not I am looking at the right direction.
Among other sources these seem to be the most promising ones:
http://blog.willbeattie.net/2009/07/log-shipping-in-sql-server-express-2008.html
http://itknowledgeexchange.techtarget.com/sql-server/log-shipping-without-sql-server-enterprise-edition/
PS: A SQL Server Standard licence is out of the budget (in developing countries the cost is quite high, and the org is still battling state bureaucracy to get its legal not-for-profit status, so no discounts there).
Thanks in advance.
MadOp
Edit: I forgot to say that there are 3 databases involved: one of 500 MB, the other two around 1.5 GB each.
|
SQL Server Express mirror like setup
|
Found the following here.
For deleting folders, try this:
FORFILES -p "" /D -15 /C "cmd /c IF @isdir == TRUE rd /S /Q @path"
/D is the number of days; you can play with the command parameters to meet your exact requirement.
You can also use environment variables so that you only delete files for the user that is currently logged on. For example, you can use %HOMEPATH%\Desktop to get to the desktop of the current user. More environment variables here.
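As a hedged sketch, this is roughly how that line could be appended to the backup script from the question, assuming the dated folders sit directly under G:\ and are named IMS-*:
echo Delete old backup folders if older than 30 days
REM /P = where to look, /M = folder-name mask, /D -30 = only items older than 30 days
FORFILES /P G:\ /M IMS-* /D -30 /C "cmd /c IF @isdir == TRUE rd /S /Q @path"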
|
I have a bat script that I run every day. It:
1) Creates a folder with today's date
2) Copies some files to the new folder
I want the script to also delete a folder and all the files in it if the folder is older than 30 days.
@echo off
echo
echo ------------------------------------------------------------------
echo Daily script that backs ups important files
echo ------------------------------------------------------------------
echo ------------------------------------------------------------------
echo Calculation of date
for /f "delims=" %%a in ('wmic OS Get localdatetime ^| find "."') do set "dt=%%a"
set "YY=%dt:~2,2%"
set "YYYY=%dt:~0,4%"
set "MM=%dt:~4,2%"
set "DD=%dt:~6,2%"
set "HH=%dt:~8,2%"
set "Min=%dt:~10,2%"
set "Sec=%dt:~12,2%"
set datestamp=%YYYY%%MM%%DD%
set timestamp=%HH%%Min%%Sec%
set fullstamp=%YYYY%-%MM%-%DD%_%HH%-%Min%-%Sec%
echo ------------------------------------------------------------------
echo Make new Backup folder
md G:\IMS-%fullstamp%
md G:\Backup\IMS-%fullstamp%\Services
echo ------------------------------------------------------------------
echo Copy files into backup folder
xcopy /s /y C:\Services G:\IMS-%fullstamp%\Services
echo ------------------------------------------------------------------
echo Delete old backup folders if older than 30 days
|
Delete folder and files if the folder date is 30 days old
|
You are making it too complicated. You should find the directories that are old enough and simply tar-gzip each of those.
find /opt/backup/ -mtime +"2" -type d -exec tar cvfz backup.tar.gz {} \;
This will look for all directories (-type d) and execute a command on each of them (tar cvfz backup.tar.gz {}), in which {} is a placeholder for the directory found.
If you want to preserve the name of the dir, simply use {} a second time:
find /opt/backup/ -mtime +"2" -type d -exec tar cvfz {}.tar.gz {} \;
Note that no quotes are required around {} as special chars will be handled well inside find's exec.
|
I have a folder in /opt/backup in which folders are created every day. In order to save space I would like to gzip all folders that are older than 2 days.
I don't want to create one single zip file but rather zip each folder on its own, with the name preserved. I have tried:
#!/bin/bash
# Backup files
files=($(find /opt/backup/ -mtime +"2"))
for files in ${files[*]}
do
echo $files
tar cvfz backup.tar.gz $files
done
But all this does is create a single archive; I would like each folder archived separately.
The script will run every 2 days at 02:00 in the morning. How do I write this script, please?
|
Script to backup folders
|
Found a solution for my current problem:
Connect to the environment and then run the following commands:
wget https://s3-us-west-2.amazonaws.com/ps-tools/riak-data-migrator-0.2.9-bin.tar.gz
tar -xvzf riak-data-migrator-0.2.9-bin.tar.gz
cd riak-data-migrator-0.2.9
java -jar riak-data-migrator-0.2.9.jar -d -r /var/riak_export -a -h 127.0.0.1 -p 8087 -H 8098
(source: https://github.com/basho-labs/riak-data-migrator)
EDIT: Another way to export a Riak DB is riak-bucket-exporter: https://www.npmjs.com/package/riak-bucket-exporter
#!/bin/bash
for bucket in $(curl http://localhost:8098/riak?buckets=true | sed -e 's/[{}:"]//gi' -e 's/buckets\[//' -e 's/\]//' -e 's/,/ /g')
do
echo "Exporting bucket $bucket"
rm -f $bucket.json
riak-bucket-exporter -H localhost -p 8098 $bucket
done
echo "Export done"
|
I have just dumped (backed up) a Riak DB, but the backup file is a binary file.
Is there a library that can deserialize it into a human-readable file (JSON or whatever)?
I haven't found anything on Google, nor on Stack Overflow.
|
How to deserialize a Riak backup into a JSON?
|
Mark Cummins in the AppEngine-GoogleGroup has discovered that it's under Storage > Cloud Datastore > Settings.
|
This is the functionality I am referring to in the old style GAE console
At the bottom of this page, there is a 'Backup Entities' button.
I am not able to find the corresponding function in the new GAE console interface.
Is it possible to manually drive a backup process in the new GAE console, as of the time of writing (4 Jun)?
|
Where is the datastore admin in the new Google App Engine console UI?
|
In case someone else comes along seeking answers - note that the question has been well answered via discussion in the comments.
It sounds like my practice won't blow anything up but there's a better way: a "Version Control System" or "VCS". I'm going to have to do a little research and pick one before my semi-bad habit gets too ingrained.
Thanks @xantos and @DanielLane!
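For anyone who lands here later, a minimal sketch of what putting a project under Git (one popular VCS) looks like from a Git Bash prompt; the folder name is a placeholder and nothing here is specific to Visual Studio:
cd /c/projects/MyProject     # placeholder project folder
git init                     # turn the folder into a repository
git add .                    # stage all current files
git commit -m "Initial commit"
git checkout -b experiment   # try risky changes on a branch instead of copying the folder
Each commit becomes a restorable snapshot, so branches and commits replace the "BACKUP Of projectName" folder copies.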
|
I'm learning C# (self-teaching first real programming language other than VBA).
Consistently, my textbook asks me to create a new project and add a bunch of existing items from an old project when I don't want to mess up my existing project. This seems to be their way of creating a backup. They never really said not to just copy folders, so I've been doing that and it works fine.
The IDE doesn't allow you to save a whole project with a new name (i.e. Save As: "BACKUP Of projectName") so instead I close the IDE and just copy the folder. It's been a great time saver rather than following their laborious instructions but I fear that I'm teaching myself a bad habit. Please tell me my fears are unfounded.
|
Is it a bad idea to copy project folders for backup? Add existing items instead?
|
@echo off
setlocal enableextensions disabledelayedexpansion
set "source=%cd%\source"
set "target=%cd%\target"
for %%a in (DB_Live DB_Test) do (
set "first=1"
for /f "delims=" %%b in ('
dir /a-d /tw /o-d /b "%source%\%%a_*.bak"
') do if defined first (
set "first="
copy /b /y "%source%\%%~b" "%target%\%%a%%~xb"
)
)
For each set of files, execute a dir command in reverse modified date order. In this list the first file is the last modified. Copy this file to the target overwritting the existing file (if present).
|
I have a backup program that saves .bak files into a folder which it rotates on a weekly basis automatically.
The files are given names like this:
DB_Live_19052015.bak
DB_Test_19052015.bak
DB_Live_18052015.bak
DB_Test_18052015.bak
The backup program doesn't allow me to edit these names and I actually don't want it to.
What I do need is to be able to copy the newest file of each set DB_Live_XXXXXXXX.bak & DB_Test_XXXXXXXX.bak and rename them to drop the date so I end up with files like this for DR:
dr/DB_Live.bak
dr/DB_Test.bak
This would be overwritten each time the script was run.
Now, I can copy the latest file in a folder and rename it using scripting, but I cannot get my head around how to:
A. get the set of latest files (multiple)
B. rename these files based on their original names to drop only the date on the end.
What I am expecting to have to do is the following:
copy the latest files to a dr folder
get the file names for each file
rename the files and overwrite anything already there with that name
I'm going to be adding this scripting into the backup program so it runs when the backup has finished.
The reason for these files is so I can RSYNC them off site without sending the whole file every time.
|
Windows batch script to copy newest files with partial names
|
The short answer is "yes": rsync does mostly avoid copying the range i+s to n+s. It breaks the file on the receiving side into blocks and calculates a checksum for each block; the sender then iterates over its own copy of the file using a rolling checksum. That way, if a block that already exists on the receiver side appears anywhere in the sender's file, it won't be copied again.
This allows the offset between where a block sits in the two files to be any size. The only data in the range i+s to n+s that would be copied again is the data inside a block that has been modified. Thus the re-copied data is a function of your block size, which depends on the file size if you don't specify it using --block-size. The worst case, if you insert data in one location, is that two blocks mostly containing data that already exists on the receiver side are copied over.
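To make the rolling-checksum idea concrete, here is a small Python sketch of an rsync-style weak checksum that can be slid one byte at a time instead of being recomputed from scratch; it illustrates the principle only (the window size here is arbitrary), it is not rsync's actual code:
M = 1 << 16  # modulus used by the classic rsync weak checksum

def weak_checksum(block):
    """Compute the pair (a, b) over a block of bytes from scratch."""
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, old_byte, new_byte, block_len):
    """Slide the window one byte: drop old_byte, append new_byte."""
    a = (a - old_byte + new_byte) % M
    b = (b - block_len * old_byte + a) % M
    return a, b

data = bytes(range(40)) * 3
n = 16
a, b = weak_checksum(data[0:n])
for i in range(1, len(data) - n + 1):
    a, b = roll(a, b, data[i - 1], data[i + n - 1], n)
    assert (a, b) == weak_checksum(data[i:i + n])  # rolling matches full recompute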
What if s%block size > 0 and the blocks don't align for i+s..n+s? Edit: never mind, the rolling checksum covers any alignment.
– Michael Taylor
May 4, 2015 at 14:02
Right, the only thing s%block size != 0 affects is whether you have to re-transmit data that is in blocks on the edge of the new data.
– Bob
May 4, 2015 at 14:59
|
Suppose I have a file of size n, which has been replicated to another location with rsync.
Source
|-------------------------|
0 n
Destination
|-------------------------|
0 n
In the source file, s bytes are inserted at position i.
Source
|----------|-----|--------------|
0 Same i i+s Same n+s
Destination
|----------|--------------|
0 Same i Same n
Does rsync generally avoid copying the range i+s...n+s since it's the same as i...n in the destination?
If so, what are the limits on i and s before rsync has to copy a significant amount (or all of) i+s...n+s?
|
Is rsync efficient at copying files that grow from the middle?
|
I apologize this happened.
In the future, the best way to test this is to restore the database under a different name and keep the original name on the original database.
Renaming causes a few different operations to run in the background to make the rename happen and I know at least one of those will break the backup chain.
|
This afternoon a Sitefinity application broke. Our client had been madly updating content in preparation for imminent go live. We quickly decided that restoring the SQL Azure database would be the fastest way to fix things.
Something I had done in the last 10-20 minutes broke the application. The client had been updating content during this time. To avoid losing too many content updates, I thought I would try restoring to 10 minutes prior, then if that didn't work, try 20 minutes.
I used the Azure portal to restore to a new database from 10 minutes prior. This finished in about 5 minutes. I stopped the application, renamed the original database _latest, then renamed the restored database to the original name, then restarted the app.
Unfortunately the problem was still present, so I thought I would then try restoring to 20 minutes prior.
The problem is, after I have renamed the databases, all the point in time restore data is gone - from both the original and the restored!
I tried renaming the _latest database back to its original name, but still there is NO restore data available!
So, I'm wondering what procedure should I follow to restore a database without losing the restore data?
|
What's the right way to restore using SQL Azure point in time restore?
|
Finally I got a solution like this, and it works:
/** Thank god, hopefully this will back up the database */
public void backupDatabase(Connection conn) throws SQLException {
    LOGGER.info("Inside backupDatabase");
    // used to get DB location
    String path = appConfigReader.getDbAbsolutePath();
    Statement stmt = conn.createStatement();
    // used to get DB url
    String dburl = AppConfigReader.getInstance().getDatabaseUrl();
    String[] bkp = {"-url", dburl, "-user", "sa", "-password", "sa", "-script", path + "/myApp001.sql"};
    Script.main(bkp);
    LOGGER.info("backupDatabase method executed successfully");
}
|
I have an application and I need to back up the DB as a .sql file. I have the following code, but it backs up as a .zip:
public void backupDatabase(Connection conn) throws SQLException {
//to get absolute path
String path = appConfigReader.getDbAbsolutePath();
Statement stmt = conn.createStatement();
String sql = "BACKUP TO '"+ path+"myApp001.zip'";
stmt.executeUpdate(sql);
}
Is there any way to create the .sql file at a location inside my server itself? I know there are commands, but I need Java code. Is there any way to do this with
'org.h2.tools.RunScript' or 'Script', or any other method?
|
Backup from H2 database as .sql file
|
Google Drive is included in the Reports API of the Google Apps Admin SDK. It provides similar information to the Google Drive Audit Log, but with additional metadata. That includes the parent folder ID of files which were removed.
To restore the files you should first query the Reports API for files removed by the user in question over the relevant time period, using the Activities:list method.
Then you'll need to setup a Google Apps service account (which is a little confusing), to allow you to impersonate the owners of the documents that were removed.
Lastly, you can iterate over the event report for the removed files and use the Files: patch method in the Google Drive REST API to re-add the parent IDs to each of the files.
See the Gist Using Google Drive API to restore files removed from shared folders for an example of the last step.
|
A non-privileged Google Drive user has accidentally removed a large number of files from folders shared across an organisation. They do not have permission to delete the files entirely, because they are not the owner. However, users with edit permissions are able to remove a file from a shared folder. This returns the file to the owner, but seems to leave it orphaned without a parent folder.
The files were owned by various different users.
How do I restore these files to their correct folders? The Google Drive Audit Log does not contain enough information to restore the folders correctly - the parent folder ID is not included with the "Remove from folder" event.
|
How do I restore deleted documents from shared Google Drive folders?
|
stat prints 3 different times:
Access - the last time the file was read
Modify - the last time the file was modified (content has been modified)
Change - the last time meta data of the file was changed (e.g. permissions)
This explains why the Change time differs between a/f and b/f (the copy's metadata was written when cp ran),
while the Modify time is the same (the file's content didn't change upon cp).
File: `a/f'
...
Access: 2015-04-05 16:15:22.000000000 +0300
Modify: 2015-04-05 16:15:13.000000000 +0300
Change: 2015-04-05 16:15:13.000000000 +0300
File: `b/f'
...
Access: 2015-04-05 16:15:22.000000000 +0300
Modify: 2015-04-05 16:15:13.000000000 +0300
Change: 2015-04-05 16:19:49.000000000 +0300
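So to test whether a file has changed relative to its backup copy, compare the Modify time (or the content itself) rather than %z. A small sketch using GNU stat, with the paths from the question:
# %Y prints the Modify time as seconds since the epoch
[ "$(stat -c %Y a/f)" = "$(stat -c %Y b/f)" ] && echo "mtime preserved"

# or compare the content directly
cmp -s a/f b/f && echo "content identical"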
|
I'll provide the following script to reproduce the problem:
mkdir a
touch a/f
sleep 1
cp -a a b
stat --printf="%u %g %a %z\n" a/f
stat --printf="%u %g %a %z\n" b/f
The result for the two stat calls will differ in the timestamps:
1000 100 644 2015-04-05 10:53:35.736399836 +0200
1000 100 644 2015-04-05 10:53:36.740399841 +0200
But the cp manual says that -a should preserve the timestamps.
What am I doing wrong?
How can I ensure timestamps are kept at the copy in a way I can test for it?
I tried this on Xubuntu 14.04. Thanks for any help!
PS (important):
I just tried to check the timestamps with ls, and there I don't see the same behavior:
$ ls -l --full-time a/
-rw-r--r-- 1 foo bar 0 2015-04-05 10:53:35.736399836 +0200 f
$ ls -l --full-time b/
-rw-r--r-- 1 foo bar 0 2015-04-05 10:53:35.736399836 +0200 f
Am I checking the wrong thing with my stat command? I want to find out if a file has been "changed" by comparing it to a copy in the backup...
|
"cp -a" (Copy in Archive Mode) does not influence "stat" command in "Time of last change"
|
SQL Azure Database to store relational data.
SQL Azure Databases are backed up by default. To learn more about it, you may find this link useful: https://msdn.microsoft.com/en-us/library/azure/jj650016.aspx.
Azure Blobs to store images and videos.
Azure blobs are not backed up by default. They are replicated 3 times in the same region and you can enable geo-replication and then the contents of the blob are replicated to a storage account in another region which is at least 400 miles away but in the same geographical region (e.g. US East Storage account would get replicated to US West region). BUT replication is not backup! To backup, you could simply use tools like AzCopy to manually copy contents of your storage account to another storage account for backup purposes.
Web/Worker Roles
Since web/worker role are deployed as packages and the packages are first uploaded into blob storage, you can make use of same backup mechanism that you would use to backup your blob storage contents. If you want to get a copy of the package/config file, you can use Get Package functionality.
|
This is my architecture at the moment:
SQL Azure Database to store relational data.
Azure Blobs to store images and videos.
Azure Web Role to host WCF services (Available internally and as an external API)
Azure Web role to host a web application.
I'm a little bit lost on what I need to manually back up. Can someone point me in the right direction?
Any links on backup / disaster recovery would be greatly appreciated.
|
Windows Azure - What is backed up automatically? and what do I need to backup manually?
|
Yes, you can create backups from P4V using custom tools instead of the P4Admin GUI. See the following article 'Server Backup in P4V':
http://answers.perforce.com/articles/KB/3346
There is an existing enhancement request for a user interface in P4V Admin to set up backups. I will add your post to this request.
|
Is it possible to create a backup of perforce using the p4admin GUI?
|
Creating a Perforce backup using the p4admin GUI
|
The transaction log is a serial record of all the transactions that have been performed against the database since the transaction log was last backed up.
So, if there were no changes in the database between transaction log backups the First and the Last LSN will be the same.
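If you want to check this before restoring, you can read the LSNs either from a backup file or from msdb; a quick sketch (the path and database name are placeholders):
-- inspect a single backup file: the result set includes FirstLSN and LastLSN
RESTORE HEADERONLY FROM DISK = N'X:\Backups\MyDb_log.trn';

-- or list the recorded LSN ranges for all log backups of a database
SELECT backup_finish_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = 'MyDb' AND type = 'L'
ORDER BY backup_finish_date;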
|
When restoring SQL Server Transaction Log files, I have noticed a number of log backups taken over night that have the same First and Last LSN.
Do these files have to be restored as part of the chain or can they be skipped?
|
First LSN and last LSN the same in transaction log backups
|
Most databases have some remote-backup service available, so I'd look into that first.
That said, you could use a library that simplifies secure-shell operations. One of those is Fabric, which is based on Paramiko.
Fabric was designed for things like remote backups (or deployments). You want to look specifically at its get operation.
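A minimal sketch of that pattern with Fabric 1.x (the host, dump command and paths are placeholders; save it as fabfile.py and run it with "fab backup"):
from fabric.api import env, run, get

env.hosts = ['deploy@db.example.com']   # placeholder SSH host

def backup():
    # dump the database on the remote box, then download the resulting file
    run('mysqldump --all-databases > /tmp/backup.sql')   # placeholder dump command
    get('/tmp/backup.sql', 'backups/backup.sql')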
|
I tried to run a remote command to back up my server DB; after dumping it, I can't figure out how to remotely fetch the data.
https://docs.python.org/2/library/subprocess.html
I found some documentation, but that didn't really help me.
|
How to run remote shell command using python script
|
Just take a new checkpoint and you'll be fine. Each checkpoint stands on its own as a snapshot of the database at the point when it was taken. The journal files fill in the time in between checkpoints.
To recover the database from a disaster, all you ever need is the last checkpoint plus the current journal file. If you've lost the last checkpoint somehow but you have an older checkpoint plus the intervening journal files, you can use the journals to catch up:
checkpoint.n + journal.n = checkpoint.n+1
Hence once you take a new checkpoint, everything before it becomes redundant from a recovery perspective.
When you create checkpoint.n, the current journal is rotated and becomes journal.n-1, filling in the operations between checkpoint.n-1 and checkpoint.n. The current journal starts over from scratch recording everything that's happened since checkpoint.n.
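For reference, a hedged sketch of the commands involved, run on the server machine (the P4ROOT path and the checkpoint number are placeholders):
# take a new checkpoint; this also rotates the current journal
p4d -r /path/to/P4ROOT -jc

# disaster recovery later: replay the newest checkpoint, then the current journal
p4d -r /path/to/P4ROOT -jr checkpoint.42
p4d -r /path/to/P4ROOT -jr journal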
|
I made my first Perforce checkpoint and deleted the folder by accident. What should I do? Will creating a new checkpoint create a gap in the "chronology"? Can I make a checkpoint that is not reliant on previous checkpoints? Apologies about any ambiguity, I am new to Perforce server management. Thanks
|
Deleted a Perforce Checkpoint
|
This is a known issue (OPSC-4587) that will be fixed in OpsCenter 5.1.1 and should be out very soon. Unfortunately in the meantime there is no workaround other than downgrading to OpsCenter 5.0.2
Update: 5.1.1 just dropped and includes the fix! Release Notes
|
We're using DataStax DSE 4.6.1 with OpsCenter 5.1.0 on EC2 (4 nodes running on m2.4xlarge).
We set up the new backup service, and it worked, but now I get an error:
"Snapshot of all keyspaces on node xxx.xxx.xxx.xxx failed:
clojure.lang.Compiler$CompilerException: java.lang.ClassFormatError:
Invalid method Code length 116786 in class file clojure/core$eval73,
compiling:(NO_SOURCE_PATH:0:0) (xxx.xxx.xxx.xxx)"
Any ideas? It won't work with either local backups or S3.
thanks
|
datastax backup service not working
|
A Git workflow can be a confusing thing to set up. The important thing to understand is that git "push" and "pull" work with "remotes". You are best served with a single remote in a secure location (backups can be done separately using the '--mirror' option, but don't use it for committing).
Step one; Create a "bare" repo where all development will end up;
git init --bare /path/to/repo/websiterepo.git <- this is a bare repo..
Do not use the bare repo for ANYTHING except pushing to and pulling from. Don't manipulate it, don't "git add" or "git commit" to it, and you will avoid damaging it.
Step two; Convert your web site to a git repo, and make the bare repo available for pushing and pulling;
cd /path/to/website
git init
git add .
git commit -m 'Initial Commit'
git remote add origin /path/to/repo/websiterepo.git
git push -u origin master:master
Step three; Create a "backup" on the "backup server"
cd /somewhere/safe
git clone --mirror user@website:/path/to/repo/websiterepo.git
(in future, use the following commands to refresh the backup)
cd /somewhere/safe/websiterepo.git
git remote update
Step four; Set up your DEV location
cd /home/me/websitedev
git clone user@website:/path/to/repo/websiterepo.git website
cd website
git checkout master
git checkout -b devbranch <- work on dev branch
git add changes
git commit -m 'my changes'
git push -u origin devbranch:devbranch
Step five; Set up your TEST location
cd /home/me/websitedev
git clone user@website:/path/to/repo/websiterepo.git website
cd website
git checkout devbranch
Workflow..
Make changes in /home/me/websitedev/website
git add changes, git commit, git push (will push to devbranch)
Go to the test machine; git checkout master; git pull; git merge devbranch; perform testing (if git pull doesn't pull down devbranch changes, use git fetch --all)
Update the production repo: git push origin master:master
Update production; git pull
I hope this helps.
|
What we are trying to do:
We are trying to use git to create an efficient development process as well as keep a current backup of our website. We have three machines in the loop:
production: Hosted on remote webserver - LAMP, git 2.0, centos
back-up: local workstation - windows, git Shell
development/testing: An on-site webserver - LAMP, git 2.0, centos 6
We want to clone a bare repository down from the Production server to my Workstation to keep a current backup.
I want to create development branches on my local machine to make changes to files which I want to pull into a repository on the Development/testing server. This way I can use this repository as the test step to merge things and be sure they work properly.
Once everything works I want to pull the new branch over to the Production server and merge to complete the update to the live site. Then once that is in place, the next backup would catch the last updates.
The Problem:
We have tried starting with a --bare repo on the production server and cloning it down to the other two machines. This works to get the repo on the machines, but doesn't pull any files that have been added and committed on the --bare repo. We also can't push changes up to the production server.
We have also tried just using git init on the production server, but the problem persists. We are using the following syntax:
git init <repo> or git init --bare <repo>
git add .
git commit -m "message"
git clone ssh://[email protected]:/path/below/repo.git or repo/repo.git
git remote add <name> <repo>
git pull <remote>
git fetch <remote>
The Question:
Are we not using git properly as a production tool, or are we making some syntactical or other error in using git?
|
Git: can't get files to push up to repository in three machine environment
|
I don't believe there's a backup agent for Linux. You would use your standard backup/restore strategy here, for example rsync if it's just files or Bacula for something else. However, if the files absolutely need to be in the vault (say, because there are Windows Server VMs that need to use them) then I would suggest you use Azure Files to get the files out of Linux, then back them up from the Windows VMs. You can of course scp them, or use other methods. HTH.
|
I have been tasked with backing up certain files that exist on a Linux VM in Azure to an Azure backup vault.
I'm following this documentation:
http://azure.microsoft.com/en-gb/documentation/articles/backup-configure-vault/
However I can't see a backup agent for a Linux box.
Am I missing something?
T
|
Backup files from linux vm in Azure
|
Keep in mind that Redis is a single-threaded event loop. A transaction is applied atomically when the EXEC command is executed. So either the RDB background save process is forked before the EXEC, or after the EXEC. You can consider that the fork takes an instantaneous snapshot of the memory of Redis.
If the EXEC is applied before the fork, then your transaction will be in the resulting dump. If the EXEC is applied after the fork, your transaction will not be in the dump, even if Redis takes minutes to generate it. Nothing will be delayed (neither the transaction nor the dump).
On a side note, except if your database is tiny, doing a dump every few minutes is probably too heavy. Perhaps you should consider using the append-only file instead.
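To make the MULTI/EXEC point concrete, a small redis-py sketch (connection details are the defaults and the keys are placeholders). The two writes become visible to a snapshot only together, never half-applied:
import redis

r = redis.StrictRedis(host='localhost', port=6379)

# transaction=True wraps the queued commands in MULTI/EXEC
pipe = r.pipeline(transaction=True)
pipe.set('order:42:status', 'paid')   # placeholder keys
pipe.incr('orders:paid:count')
pipe.execute()   # both commands are applied atomically at EXEC time

r.bgsave()       # the forked snapshot sees either none or both of the writes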
|
Say I have set up my Redis backups to snapshot every few minutes, and, unluckily, when a snapshot is triggered a transaction (using a pipeline) is in progress. How does Redis deal with this situation? Will the snapshot be delayed until the transaction finishes? Will it save only the first part of the transaction? Or will it exclude the whole transaction until the next backup?
|
What if Redis triggers a snapshot in the middle of transaction?
|
That was rather painful. It seems that the way to fix it was to dump django_content_type as CSV from the production Postgres database, delete the IDs from the resulting CSV file, then do the following on the SQLite database for the test version:
CREATE TABLE temp_table(a, b, c);
.mode csv
.import content_type.csv temp_table
DELETE FROM sqlite_sequence WHERE name = 'django_content_type';
DELETE FROM django_content_type;
INSERT INTO django_content_type(name,app_label,model) SELECT * FROM temp_table;
That had the effect of setting the ids of the entries in the django_content_type table to match those in the dump, allowing the restore to proceed.
|
I've been running a daily dump of a production Django application as follows:
./manage.py dumpdata --exclude=contenttypes --exclude=auth.Permission -e sessions -e admin --all > data.json
Normally, restoring this to another installation for development hasn't caused a problem, but recently attempts to restore the data have caused this:
./manage.py loaddata -i data.json
django.db.utils.IntegrityError: Problem installing fixtures: The row in table 'reversion_version' with primary key '1' has an invalid foreign key: reversion_version.content_type_id contains a value '14' that does not have a corresponding value in django_content_type.id.
This suggests to me that the problem has been caused by the recent addition of django-reversion to the codebase, but I am not sure why and I have not been able to find any means of importing the backup. Some posts suggest that using natural keys may work, but then I get errors like:
django.core.serializers.base.DeserializationError: Problem installing fixture 'data.json': [u"'maintainer' value must be an integer."]
"maintainer" is in this case a reference to this bit of code in a model definition in models.py:
maintainer = models.ForeignKey(Organization,related_name="maintainer",blank=True,null=True)
Does anyone have any suggestions as to how I might get this dump installed, or how to modify the dump procedure to make a reproducible dump?
I note that the production site is using Postgres and the test site has SQLite, but this has never been a problem before.
|
Restoring Django dump
|
Remarks:
Using /PURGE or /MIR on the root directory of the volume will cause robocopy to apply the requested operation on files inside the System Volume Information directory as well. If this is not intended then the /XD switch may be used to instruct robocopy to skip that directory.
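Applied to the command from the question, that would look roughly like this (a sketch; adjust the excluded folders to taste):
ROBOCOPY C:\ "D:\HD Backup" /e /mir /tee /mt:4 /A-:SH /XD "C:\System Volume Information" "C:\$Recycle.Bin" /log:C:\Users\Aaron\Desktop\backup_log_HD.txt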
|
I am trying to use robocopy to back up my entire hard drive to my external hard drive using the following command:
ROBOCOPY C:\ "D:\HD Backup" /e /mir /tee /mt:4 /A-:SH /log:C:\Users\Aaron\Desktop\backup_log_HD.txt
However, my hard drive only holds 170 GB, and the copy had grown to 400 GB before I stopped it.
|
Backup Hard drive with robocopy
|
There is a way to set up a special backup configuration file on your other instances that would allow you to directly access the production S3 bucket from another environment within the same account. There is some risk involved with this, since it would also technically allow your non-production environments to edit the contents of the production bucket.
There may be some other options depending on the specifics of your configuration. Your best option would be to open a ticket with the Engine Yard Support team so we can discuss your needs further.
eyrestore is a recently released tool designed to help address this use case within Engine Yard in a safe fashion. It's available on recent versions of the Stable-v4 and Stable-v5 stacks.
– tpol
May 22, 2017 at 17:08
|
We have a couple of environments in Engine Yard. Each of them runs the same application, but on different stages: production, staging, etc. In total about 10 environments. Now, we want to dump the production database every night, and restore it on the rest of environments to have the latest data.
The problem is, an instance from one environment can't access instances in other environments. There are two ways to connect that are suitable for us:
SSH.
Specify the RDS host as the --host parameter to mysqldump. The RDS host is of the form environment.random_string.region.rds.amazonaws.com as opposed to a regular EC2 host name.
Neither of them works out of the box. The straightforward solution would be to generate RSA keys on all the servers that want access, and add them to authorized_keys on all the servers that should allow access. However, this solution isn't scalable: once we add or recreate an environment, we'd need to repeat the process.
Is there any better solution?
|
Access one environment from another in Engine Yard
|
The easiest way to do this would be to find an existing IPS implementation in the form of either:
A pre-existing library plus API
A pre-existing project
Fortunately for you, Neill Corlett (the wonderful guy who made the English translation of "Secret of Mana 2" / "Seiken Densetsu 3" a reality) has already created an openly available implementiation of the UPS and IPS patch generation/application algorithms. Please refer to the associated readme file to determine the licensing terms.
Anyhow, you can just modify the main() function in the program to select two particular file names (his existing program is already designed to accept them as command-line arguments), and the work is passed off to the existing functions that handle patch generation/application.
Good luck!
|
I am helping develop a program that provides an easy way to back up a specific file format in the form of an IPS patch. We want it to compare 2 files, one in the program folder with a specific name (for ease) and another that is user-selected, and then produce a patch with the current date/time as the file name. We are coding in C. Are there any tips on how we can manage this?
|
Making a backup creator
|
A full backup only contains the portion of the log that was generated during the backup; that should be very small.
If you enable simple recovery, that will throw away all log records that are not backed up and break the log chain. Is there a reason to be in full recovery mode? If yes, you should probably make yourself more familiar with how not to break the log chain.
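For reference, a short sketch of how to check which recovery model each database uses, and what switching to simple looks like (remember that doing so breaks the log backup chain):
-- which recovery model is each database using?
SELECT name, recovery_model_desc FROM sys.databases;

-- switching to SIMPLE stops log backups from being possible (and breaks the chain)
ALTER DATABASE [DBName] SET RECOVERY SIMPLE;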
|
I have a database "DBName" on SQL Server 2008. I want to take a backup of it without the log (.ldf) file, because the log file is around 20 GB and I don't want to increase the size of the backup file.
I also want to do this without truncating the logs of the current live database.
Meaning, the backed-up copy shouldn't contain transaction logs, but the live database "DBName" should remain unaffected.
P.S. - I am taking the backup with the following script. Variables are set from the UI in WPF.
exec('BACKUP DATABASE '+ @DBName +' TO DISK ='''+ @DBBackupPath +'''')
Thank you.
EDIT
Should the SQL Server simple recovery model help?
|
Taking SQL SERVER Database backup without Logs
|
See here to iterate over the array keys. This way you do not need to nest the loops: $i will iterate over 0 1 2, you just need to make sure both arrays have the same number of elements.
#/bin/bash
WEBSITES=(A B C)
DATABASES=(X Y Z)
echo "debug: ${!WEBSITES[@]}"
for i in "${!WEBSITES[@]}"; do
site=${WEBSITES[$i]}
db=${DATABASES[$i]}
echo tar $site
echo mysqldump $db
done
results in:
debug: 0 1 2
tar A
mysqldump X
tar B
mysqldump Y
tar C
mysqldump Z
|
I am attempting to make a backup script for my websites, but I am having issues with a nested for loop.
BACKUP_DIR="/path/to/output"
WEB_DIR="/srv/http"
WEBSITES=($WEB_DIR/website_one $WEB_DIR/website_two)
MYSQLDBS=(database_one database_two)
for WEBSITEBACKUP in $WEBSITES
do
# tar commands here for website directories
for DATABASEBACKUP in $MYSQLDBS
do
# mysql dump commands here for databases
break
done
done
I was hoping that loop 1 would back up the website, then enter loop 2, which would back up the database, then break out of the inner loop and continue to back up website 2; but once it gets to the inner for loop the second time, it backs up the first database again.
My question is, how can I get the nested loop to increment until all databases in the array have been backed up successfully, or is there another way I have overlooked?
For anyone who is wondering, the reason why the databases aren't being backed up in their own for loop is that I am getting the folder name from $WEBSITEBACKUP and I would like to store the databases in the same directory as their website.
CURRENT_BACKUP=`echo $WEBSITEBACKUP | sed "s|\$WEB_DIR||g" | tr "/" "-" | cut -b2-`
|
Nested Loop with increment on inner loop?
|
The .dump command reads the contents of the database normally, just as if you would do a bunch of SELECT queries inside a transaction.
This means that when not using WAL, other connections cannot write as long as the dump is running.
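For comparison, a quick sketch of both commands in the sqlite3 shell (file names are placeholders); .backup goes through the online backup API, while .dump just reads the database like one big SELECT:
sqlite3 mydata.db ".dump" > dump.sql          # text dump; in DELETE journal mode writers are blocked while it runs
sqlite3 mydata.db ".backup backup_copy.db"    # binary copy via the online backup API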
|
I'd like to know how the .dump command affects other applications connected to the same database. I'd like to know this for the following journal modes:
DELETE (the default mode)
WAL (write-ahead-logging)
From reading other posts on this forum, .backup uses the online backup API of SQLite. It would be great to have this confirmed as well.
Thanks in advance!
|
sqlite - How does the command tool command .dump affect connected applications?
|
It helps to know what phone you have. You could look up the key sequence needed to boot into recovery mode. I'd look into flashing ClockworkMod recovery, which can mount the phone over USB while in recovery; there you could copy over a new ROM zip to flash over the messed-up Lollipop ROM.
All you have to do is format /system and install a known working ROM. Also, ClockworkMod recovery can make a nandroid backup, which can be read with Titanium Backup, so you can extract your data that way.
|
I have - stupidly enough - updated my Android phone to the newest 5.0 Lollipop. The update went bad, and many hours later the phone is still unresponsive. I can only enter recovery mode. I have seen in forums that many other users had the same problem, and it turns out that the only way out is a factory reset of the phone. This means total loss of the phone's data. As I have no backups (as far as I am aware), I was wondering if there is a simple way to transfer the data from the phone to a computer (Ubuntu) while the phone is in recovery mode.
This is what I tried already:
started the phone in recovery mode
selected the 'apply update from ADB' (as this is apparently the only way computer sees the phone through $ adb devices)
I type
$ adb devices
List of devices attached
02452a1acc14c9d4 sideload
and then
$ adb backup -apk -shared -all
adb: unable to connect for backup
My phone was not rooted, but I am pretty sure the USB debugging was activated. Any ideas? Is it possible at all to backup phone in recovery mode?
Thanks in advance.
|
Backup android to ubuntu from recovery mode
|
Use tar.
You should be able to get your symlinks on the target machine. You may, however, have to replace some of them if you move the root directory underneath some mount point.
In that case you can find the symlinks with
tar tvf the_backup.tar | egrep -i 'symbolic link to /| -> /'
(or the equivalent tar tvf uses to display symlinks on your target machine)
This will show all symlinks pointing to "absolute" links (links starting with "/"), ie the most probable troublesome symlinks if you untar your backup underneath a subdir.
ex:
$ tar tvf the_backup.tar | egrep -i 'symbolic link to /| -> /'
?rwxrwxrwx 4000 500 0 Nov 05 17:00:00 2014 foo symbolic link to /bar
Note that:
some tar implementations (GNU tar, for example) use " -> xxx" to indicate a symlink to xxx
some use " symbolic link to xxx" instead (and other variations can occur ...)
some (old) tar implementations won't get rid of the starting '/', so be careful: if you use one of those old versions to untar on the target machine, the file "/something" will go to "/something" instead of "/subdir_where_you_wanted_to_extract/something" ... It can hurt a lot. Try it on a small tar containing just "/dummyfile" first.
|
I'm trying to download a copy of my entire server (Linux with cPanel installed, running CentOS).
What I originally did was tar the entire server and then download the tar file. The problem with this was that when I went to extract it on my home computer to make sure everything was there, I got a lot of errors in WinRAR saying that it couldn't extract certain files.
After investigating it further I realized that the files it couldn't extract were symlinks.
I don't necessarily need to be able to use the symlinks on my home computer, but it would be nice to have a backup of them in case they need to be re-created. If I were to extract the file onto a Linux server, would the symlinks still work, so I could use them as a reference in the future to know which files were linked to which?
Any advice on this would be greatly appreciated!
|
How to backup entire server with symbolic links present?
|
I don't know bacula or what its needs are regarding erasing tapes, but the normal way to erase a tape on Linux is:
mt -f /dev/????? erase
Maybe a physical erase is not what is needed by bacula - maybe it needs an entry in a tape management database deleting or resetting. Maybe it needs to have a label written at the start of the tape - which will also erase the rest of the tape's contents.
Thanks for helping me, but is there any way for me to erase all the tapes?
– Gustavo Filgueiras
Oct 30, 2014 at 13:44
How many do you have? Why can't you put them in your tape drive(s) one at a time and run the command above? Or do you need a physical bulk eraser machine, also known as a degausser?
– Mark Setchell
Oct 30, 2014 at 13:46
|
I'm using Bacula as a backup tool, but as I am not responsible for the process, I do not know how to reset the tapes to start over.
|
How to erase the data on Bacula's tapes
|
The problem was not with the script or password.
The job is scheduled to run every month, but someone ran a backup mid month - without a password, and replaced the backup file with their (unsecured) backup.
This caused the job to fail, as SQL Server (clearly) checks the password of the existing backup before writing over it (which is interesting).
|
I have a job that backs up a production database saves it to disk with encryption.
BACKUP DATABASE MyFreshDB
TO DISK='\\HomeServer\data\MyFreshDB.bak'
with copy_only, init,MEDIAPASSWORD='8888'
But I get this error:
Msg 3279, Level 16, State 4, Line 1
Access is denied due to a password failure
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
The script above has been working fine for months, and removing the mediaPassword statement allows the backup to go ahead with no problems.
Any ideas?
|
SQL Server Backup failing when using MediaPassword
|
Found out that the problem was that the s3cmd configuration file was not found when running s3cmd.
So the fix was simply to copy this .s3config file to a safe shared folder, and then call s3cmd with the "--config" parameter followed by that file, like this:
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1
|
I own a QNAP-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work (on command line), even running the script file (s3-backup.sh) on command line:
#!/bin/bash <-- I also tried #!/bin/sh
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log <-- I also tried running s3cmd via python by adding /usr/bin/python on the front.
If I run using the SSH command prompt, it seems to work perfectly.
The problem, though, is the cronjob. I can confirm that the cronjob triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created/modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've done a number of different variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when the crontab is running, as compared to when I run it on command prompt. But I don't know how to debug crontab.
|
Setup Amazon S3 backup on QNAP using s3cmd
|
Forgot to reply back on this, sorry all. I looked at the exception and at all the service accounts used by our TFS (which I didn't set up) and made sure each maps to the proper SQL permissions. In the end, I granted our TFSService account sysadmin rights on the SQL box and that fixed the problem!
Ty
related to social.msdn.microsoft.com/Forums/silverlight/en-US/…
– sorosh_sabz
Aug 12, 2022 at 14:28
|
I couldn't find any documentation on this error. I've inherited TFS server administration and gotten this error message when configuring scheduled backup: TFS 2013 Update 2
"TF400975: Failed to grant TFS Job Agent permissions to start database backups on SQL Server xxxxxx"
TY in advance
|
TFS 2013 : TF400975: Failed to grant TFS Job Agent permissions to start database backups on SQL Server
|
Use set backupcmd=/W /E /H /V /C /Z /I /F /J /R /Y
instead of set backupcmd=xcopy /W /E /H /V /C /Z /I /F /J /R /Y. You have a redundant xcopy in the parameters.
EDIT: As far as I understood your comments, you need a new folder like this: "S:\Internal Auditor\%date:~5,2%-%date:~8,2%-%date:~0,4%"
so you can do this:
set "SRCFOLDER=S:\Internal Auditor"
set "DESTFOLDER=S:\Internal Auditor\2014"
set "folder=%date:~5,2%-%date:~8,2%-%date:~0,4%"
md "%DESTFOLDER%\%folder%" >nul 2>&1
set "backupcmd=/W /E /H /V /C /Z /I /F /J /R /Y"
echo ######## PLEASE WAIT SYSTEM BACKINGUP SOME DATA########
xcopy "%SRCFOLDER%\%folder%" "%DESTFOLDER%\%folder%" %backupcmd%
echo !!!!!!!!BACKUP COMPLETED THANKS!!!!!!!!!!!!!!
|
I am trying to write a bat file to back up a folder on my work server (sometimes the server and backup server do not sync correctly and files go missing).
I have tried many different solutions and read a few different forums to try to resolve this, but I cannot seem to find anything.
@echo This will now create a new backup of S:\Internal Auditor\9 - September 14
@echo off
:: variables
set SRCFOLDER="S:\Internal Auditor\9 - September 14"
set DESTFOLDER="S:\Internal Auditor\2014\9 - Sept Backup"
set folder=%date:~5,2%-%date:~8,2%-%date:~0,4%
set backupcmd=xcopy /W /E /H /V /C /Z /I /F /J /R /Y
echo ######## PLEASE WAIT SYSTEM BACKINGUP SOME DATA########
xcopy %SRCFOLDER% %DESTFOLDER% %backupcmd%
echo !!!!!!!!BACKUP COMPLETED THANKS!!!!!!!!!!!!!!
@pause
Please help - I'm tired of losing files, and I don't want to have to manually back up files every day.
(The goal is to create a new folder with the date & time every time it runs, under the sub-folder "9 - September 14" {historical backup}.)
EDIT
Ok - So I have another thread open for something that was different, but now my 2 questions have kinda merged together, so please look @ New folder for every backup CMD and see if you could help...
|
Bat Error "invalid number of parameters"
|
Given the listing above showing you the region names, you can use the --region parameter to override where the script is automatically finding the region info. For example:
"aws ec2 create-snapshot --volume-id vol-xxxxxxxx --region us-west-2"
I was having the same issue while trying to create the snapshot (as you see above) and once I got the region name right from the list previously posted, this worked just fine.
Best of luck!
|
This one has me a little puzzled, so I figured it might be worth posting here.
I'm trying to take regular snapshots of an Amazon EC2 instance for the sake of making backups. Thankfully, some very smart people have already written a rather nice shell script that does this: https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup
The idea behind this is that it uses the Amazon AWS CLI tools to call in to Amazon and trigger a snapshot of a given volume. In theory, this works great, however I've run into a bit of a strange problem.
The script above makes the following call:
aws ec2 describe-volumes
This is supposed to return a list of the Amazon volumes. However, it is failing with the following error:
HTTPSConnectionPool(host='ec2.us-west2a.amazonaws.com', port=443): Max retries exceeded with url: / (Cuased by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
I understand why it is throwing this error: the ec2.us-west2a.amazonaws.com endpoint isn't a valid endpoint. It should be ec2.us-west2.amazonaws.com (without the "a" after "west2"). Despite poking around in the shell script, I can't for the life of me figure out where the aws ec2 describe-volumes call is actually getting the endpoint from. If I run the command:
ec2-describe-volumes
I get a valid list of volumes, including the one I'm trying to back up. In my ~/.profile, I have properly set my EC2_URL, EC2_REGION environment variables and made sure to reload them, but am still getting the above error.
Can anyone tell me where "aws ec2 describe-volumes" is actually getting the endpoint?
|
ec2 aws describe-volumes giving HTTPSConnectionPool error
|
To skip objects by type, use the --skip option with a list of the objects to skip. This enables you to extract a particular set of objects, say, for exporting only events (by excluding all other types). Similarly, to skip creation of UPDATE statements for BLOB data, specify the --skip-blobs option.
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbexport.html
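A hedged example of both options (credentials and database names are placeholders; check the linked manual for the exact option values your version accepts):
mysqldbexport --server=user:pass@localhost --export=both --skip-blobs db_name > db_name_no_blobs.sql
mysqldbexport --server=user:pass@localhost --export=definitions --skip=grants,events db_name > db_name_definitions.sql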
|
Situation:
I have a MySQL database that I would like to backup/export as an sql file.
The database has a single table in it that contains a longblob field.
I want to export the database without the data/contents of the longblob field.
Problem:
Not sure how to export the database without the longblog field's data.
Desired result:
A shell/php script or command that will create a sql file containing that database backup without the contents of the longblog field.
|
Backup MySQL Database without blob data
|
The error indicates that the path doesn't exist. Does C:\Temp exist on the server? If so, does the SQL Server service account have access to that folder?
Other possible causes include the storage being full.
See related question:
SQL server 2008 backup error - Operating system error 5(failed to retrieve text for this error. Reason: 15105)
I can actually backup if I'm using local server. But when I'm using our remote server it doesn't work. I tried the query in the SQL Server and it didn't work.
– psyche
Jul 22, 2014 at 3:46
Assuming you have appropriate access, this may do the trick: stackoverflow.com/questions/3942207/…
– spotticusprime
Jul 22, 2014 at 3:58
I mean I need to backup a file using coding not generating script. So, I'm trying the query 'Backup Database...' in SQL Server because I need to use it in C# Coding.
– psyche
Jul 22, 2014 at 4:06
Could you post your query here, or update your question with your query string?
– spotticusprime
Jul 22, 2014 at 4:09
I'd be willing to bet that either C:\Temp doesn't exist, or the service account sql server is running under doesn't have access to that path. Try saving it to a path the service account has access to, or if you have RDP access to the server, see if the path exists and appropriate access exists.
– spotticusprime
Jul 22, 2014 at 4:35
|
I want to backup a database from our main server. But this error happens
System.Data.SqlClient.SqlException (0x80131904): Cannot open backup device 'C:\Temp\sample.bak'. Operating system error 3 (The system cannot find the path specified.).
BACKUP DATABASE is terminating abnormally.
at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
ClientConnectionId:d9c1c173-e60e-4e07-91d6-2ba43b905ff6
Thank you for your help!
The query I tried is
BACKUP DATABASE H2RPDB_v2
TO DISK = 'C:\Temp\test2.bak'
in SQL Server. H2RPDB_v2 is from my remote server.
|
Error in backing up database
|
If your site is not a live site and downtime is not an issue, you can try the below
#step1: take a dump of your db
>mysqldump --user=root --password=myrootpassword db_test > db_test.sql
#step2: zip the .sql file - this is optional
>gzip db_test.sql
#step3: transfer the file to AWS using .pem file
>scp -i myAmazonKey.pem db_test.sql.gz ec2-user@<ur_ip_address>:~/.
#step4: login to your AWS instance
#step5: unzip the file
>gunzip db_test.sql.gz
#step6: import the db to your AWS mysql instance
>mysql -u username -p db_name < db_test.sql
|
I want to move my mysql database from rackspace server to aws server.Is there any way to do it easily.It contains more or less 1 million rows of data
|
Moving mysql database from one server to another server
|
I tried to reproduce this, but I got the correct result...
Your issue seems to be that your double quotes were not substituted when the end variable was set. Could you check that you are using a plain double quote (U+0022) and not an exotic quote such as U+00AB (left-pointing double quotation mark)?
|
Best to just tell you the input/output and you will see why it is weird. Should be a quick fix for someone good at this, I took a day just to write this script.
Steps I took:
Run AutoBackup.bat
C:\cygwin\bin\sh AutoBackupShell.sh
pause
AutoBackup.bat calls AutoBackupShell.sh
name=`C:/cygwin/bin/date +'backup_%Y_%m_%d_%H_%M_%S'`
end="_Engine"
name=$name$end
C:/cygwin/bin/cp -r ./Engine Backups/$name
Output is a folder like this: backup_2014_07_16_19_07_14_Engine
Something to note is that on my windows machine those weird question mark boxes look
like these ''' kind of, just more centered vertically.
For reference the output should look like this: backup_2014_07_16_19_07_14_Engine
Computer notes:
Windows 7 64 bit
Using cygwin for sh
Another thing I need to do after I figure out why I get weird characters is how to only
copy files with specific extensions for backup. It's not part of the question but a little direction would help me out.
|
Shell script gives lots of these things ''' as output and it never did before
|
Do you have a user/home directory on that server? You should, so you should just place exclude.txt in your user/home directory on that server & run it like this from that directory:
tar -cjvf ~/2014.tar.bz2 -X ~/exclude.txt /pdf/data/pdfnew/2014
The ~/ is a shorthand for your user/home directory so in this case it is explicitly stating, "Read exclude.txt from the user/home directory & write 2014.tar.bz2 to the user/home directory."
But you also ask this:
Is there a correct way doing this?
There is never one canonical best way of doing something like this. It is all based on your final/end goal. Nothing more. Nothing less. That said, if I were you I would do it like this instead using the -C option:
tar -cjvf ~/2014.tar.bz2 -X ~/exclude.txt -C /pdf/data/pdfnew 2014
The uppercase -C option allows tar to internally change the working directory to /pdf/data/pdfnew so you can then create an archive of 2014 without having to retain the whole directory tree in the backup. I find this easier to work with because many times I want to back up the contents of a directory but have no use for the parent structure. That way the archive is more like a traditional ZIP archive, which I find easier to understand & work with.
|
My situation is I only have execute permission from some folder:
Lets say, I would like to backup entire folder and exclude some folder and files with exclude.txt
Here is path I would like to backup:
/pdf/data/pdfnew/2014
And I only have permission to execute from this folder (main):
/pdf/data/pdfnew/2014/public/main
I put exclude.txt in same folder which I can execute the command (main)
I execute this command in (main folder):
tar -cjvf -X exclude.txt 2014.tar.bz2 /pdf/data/pdfnew/2014
The result is it still included folder that I dont want to backup.
Is there a correct way doing this?
|
Backup file folder in correct way
|
iCloud will backup the App data. See the iCloud: iCloud storage and backup overview Support Document, but you turn off the App data backup on a per app basis. See the iCloud: Select which iOS apps to back up Support Document.
Basically the way that a backup and restore will work is iCloud will restore the App data, then will download the application from the iTunes Store (I'm not sure how this will work with an ad-hoc app.) Hope this helps.
|
I am developing an iPad App (will not put in App store, just ad-hoc deploy) that would have some documents save at apps Documents folder, and some where I need to enable the iCloud capability for some other function.
If I have no other special configuration (as I don't know), I would like to ask:
Will / Can iCloud or iTunes backup the Documents folder and containing files? (I don't want this)
What will be backup related to the App? (how about Library Folder)
Is iCloud independent per App?
If I restore the app to a new device, what will be preserved of not preserved?
|
Will or Can iCloud or iTunes back up Apps Documents or App Data
|
Test this: it will create a mirror backup and delete files that aren't needed but keep files that already exist, and only copy the different files.
robocopy "Z:\FILES" "C:\BACKUP" /mir /xd "trashbox"
|
Hello not sure where I should place this question but;
I want to run a batch script on a windows machine every night around midnight.
All I want it to do is back up all files and folders on a network drive and copy this to a hard drive on the computer running the batch script. The only thing unsual I want it do is exclude copyying a folder called trashbox
Local Computer
C:\BACKUP\
Network Drive
Z:\FILES\*
exclude Z:\FILES\trashbox
So it needs to;
Remove previous days backup
Start at midnight
Backup all files and folders on Z:\FILES*
Exclude Z:\FILES\trashbox* from copying
Any ideas would be most appreciated!!
|
Scheduled batch script backup folders
|
There is a very simple-to-use Python tool that automatically backs up an organisation's repositories in .zip format, saving public and private repositories and all their branches. It works with the GitHub API, so if you want to re-download your GitHub repositories onto a server, the tool can be very useful: https://github.com/BuyWithCrypto/OneGitBackup
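If you would rather not depend on an extra tool for a purely server-local backup like the one asked about below, a plain git sketch (the paths are made up) is:
# one-time: create a bare mirror of the repo elsewhere on the same server
git clone --mirror /srv/git/myproject.git /backups/git/myproject.git
# afterwards, refresh the mirror by hand or from cron
cd /backups/git/myproject.git && git remote update --prune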
|
I am trying to perform a backup on a local git repo to another location on my git server. I've seen many posts and articles about either putting github backups to the cloud to github repos hosted by githu backed up locally. Here, I'm merely trying to back the local server git repo to another location on the server. This is a unix server.
|
Local github backup
|
You can now backup Azure IaaS VMs while they are running. You can find more information about that feature here - https://azure.microsoft.com/en-us/documentation/articles/backup-azure-vms-first-look-arm/
|
We've got 2x Virtual Machines on Azure running within a single cloud service on a load balanced set.
I want to schedule periodic back ups of the VM images in blob storage; so taking the active VHDs and copying them into a separate backup container.
The question is; can I safely do that whilst the VMs are online, or should I only do this when the VMs are shut down? If we restore from a backup taken when the VMs were online, will there be an issue using the image for a new VM?
|
Should you to shut down an Azure VM to back up?
|
The modified function below should save a backup with datetime of saving included instead of ".BAK". Modified part is commented. Also, posting properly indented helps a bunch ;)
Sub SaveWorkbookBackup()
    Dim awb As Workbook, BackupFileName As String, i As Integer, OK As Boolean
    If TypeName(ActiveWorkbook) = "Nothing" Then Exit Sub
    Set awb = ActiveWorkbook
    If awb.Path = "" Then
        Application.Dialogs(xlDialogSaveAs).Show
    Else
        BackupFileName = awb.FullName
        i = 0
        While InStr(i + 1, BackupFileName, ".") > 0
            i = InStr(i + 1, BackupFileName, ".")
        Wend
        If i > 0 Then
            BackupFileName = Left(BackupFileName, i - 1)
            'Modified this part: keep a real Excel extension and append a date-time stamp
            If Val(Application.Version) >= 12 Then
                BackupFileName = BackupFileName & "_backup_" & Format(Date, "yyyymmdd") & "-" & Format(Time, "Hhmm") & ".xlsx"
            Else
                BackupFileName = BackupFileName & "_backup_" & Format(Date, "yyyymmdd") & "-" & Format(Time, "Hhmm") & ".xls"
            End If
            OK = False
            On Error GoTo NotAbleToSave
            With awb
                Application.StatusBar = "Saving this workbook..."
                .Save
                Application.StatusBar = "Saving this workbook backup..."
                .SaveCopyAs BackupFileName
                OK = True
            End With
        End If
    End If
NotAbleToSave:
    Set awb = Nothing
    Application.StatusBar = False
    If Not OK Then
        MsgBox "Backup Copy Not Saved!", vbExclamation, ThisWorkbook.Name
    End If
End Sub
|
I want Excel to automatically backup a workbook on file close without prompts to the user. I found the excellent code below online (forgot source) but the backup FileType is changing to a BAK File that I cannot open.
How do I fix this problem. Both files will be in the same folder & the backup should have same file name & "-bak" or ".bak".
Sub SaveWorkbookBackup()
Dim awb As Workbook, BackupFileName As String, i As Integer, OK As Boolean
If TypeName(ActiveWorkbook) = "Nothing" Then Exit Sub
Set awb = ActiveWorkbook
If awb.Path = "" Then
Application.Dialogs(xlDialogSaveAs).Show
Else
BackupFileName = awb.FullName
i = 0
While InStr(i + 1, BackupFileName, ".") > 0
i = InStr(i + 1, BackupFileName, ".")
Wend
If i > 0 Then BackupFileName = Left(BackupFileName, i - 1)
BackupFileName = BackupFileName & ".bak"
OK = False
On Error GoTo NotAbleToSave
With awb
Application.StatusBar = "Saving this workbook..."
.Save
Application.StatusBar = "Saving this workbook backup..."
.SaveCopyAs BackupFileName
OK = True
End With
End If
NotAbleToSave:
Set awb = Nothing
Application.StatusBar = False
If Not OK Then
MsgBox "Backup Copy Not Saved!", vbExclamation, ThisWorkbook.Name
End If
End Sub
|
Backup on File Close Excel VBA
|
You can't achieve what you intend with 7z.exe this way, because on the command line it works the same as zip and tar: you need to provide the archive name followed by the filename you want to zip. It's nothing like how it works in the GUI.
But you can put the following line in a batch file
7z.exe a -tzip "%~n1.zip" "%1"
And call the batch file like
batch_file filename.bak
It will produce filename.zip
If you have to do it for many files then you can modify your batch file as follows
FOR %%I IN (*.bak) DO (7z a -tzip "%%~nI.zip" "%%I")
For this you have to go into the folder where the .bak files are and run it; it will create the .zip files.
I hope you can work out any other required modifications around this solution.
But i have bulk of files which have to compressed each day.
– Vishnu Murali
Apr 23, 2014 at 4:30
@VishnuMurali Cant you modify the batch to suit your requirements? If not then please edit the question with EXACT queries.
– PradyJord
Apr 23, 2014 at 5:41
Try the last for in do command above. Add the path to the 7z.exe if needed.
– foxidrive
Apr 23, 2014 at 9:15
|
The problem I have relates to the file name being truncated on the first space in the file name.
7z a -t7z "D:\IDRIVE\New backups\Program\full\6\File.7z" "D:\IDRIVE\New backups\Program\full\4\*.*" -mx9
The above is the batch code i use ( Actually i use date instead of file,but i want to change to original name)
File1.bak becomes file.7z
File2.bak becomes file.7z
And if am having two files in a folder, the 7zip will compress
File1 and File2 and it becomes a single file named file.7z
I want it to be compressed separately as follows
File1 becomes File1.7z
File2 becomes Files2.7z
Please give me your valuable suggestions
But it's zipping file and also changing it's name to the date and time which it's getting zipped (It was my only option at that time). Actually i don't want to change it's original name.
And with this code the two files in a folder are compressed to a single file. I want them to be compressed separately
I want to know how to make it possible.. I am not very good in batch file programming
|
Truncated file name when compressing using batch file (7zip)
|
Most troubles of the "it works in my console but not in my cron job" kind are about environment variables like PATH. You should use an absolute path to launch your script, or be sure that it's in the PATH used by cron.
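A minimal sketch of both fixes inside the crontab itself (the paths are assumptions; point them at wherever the backup executable and your app really live):
PATH=/usr/local/bin:/usr/bin:/bin
* * * * * cd /home/deploy/myapp && /usr/local/bin/backup perform -t expense_backup >> log/cron_log.log 2>&1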
It works now: whenever and then whenever --update-crontab. Otherwise, it doesn't work for me. Actually, I'm pretty sure I tried even this at the beginning, but I won't complain now.
– enter08
Apr 14, 2014 at 13:59
|
I used the backup gem to make a backup of my database. (not so important here)
I want to schedule that on every minute (to test it for the beginning).
schedule.rb file:
set :output, "log/cron_log.log"
every 1.minute do
command "backup perform -t expense_backup"
end
At first I tried this in the backup Backup/config folder. I moved this into the schedule.rb of my app where I already have a scheduled task which works fine. There is also no output in cron_log file. The output of the whenever command for this task is this:
* * * * * /bin/bash -l -c 'backup perform -t expense_backup >> log/cron_log.log 2>&1'
EDIT:
if I write system instead of command and then try whenever, it performs the backup! The problem is that it doesn't trigger the task every minute.
|
How to schedule database backup?
|
A physical backup is a backup made with RMAN or by copying the datafiles at the OS level. A logical backup is the term used for data dumps made with tools like Data Pump or export. Technically a logical backup is not a real backup, because you cannot do a point-in-time recovery from it.
A physical backup contains the logical data (for example, tables or stored procedures), so in that sense it does include it.
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/intro001.htm
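If the DBA works with RMAN, a whole (physical) backup usually looks something like this; this is only a sketch, assuming the instance runs in ARCHIVELOG mode:
rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;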
|
What is difference between Oracle Full database Backup vs Physical/Logical?
Does Physical backup include logical?
I want to Backup Whole database what backup should i tell to DBA?
|
Oracle Whole Database Backup
|
The rules are rather simple:
When restoring a non-detached collection backup, you must restore to the same version. TFS will balk if they're not at the same level. An upgrade will change data structures and stored procedures.
When restoring a collection that has been detached you can restore and attach it to the same version or (most) newer versions (there are limitations, for example: 2012 rtm and 2012u4 can be attached to 2013, but TFS 2012u1, 2 and 3 cannot)
You cannot restore a backup from a higher version on an older version of TFS. You must first upgrade the target server.
|
We just learned this morning that in TFS 2012 Update 2 they integrated backups in TFS rather than you having to install the power tools to get backup functionality. Is there any compatibility picture between the backups created with update 1 & the power tools with the scheduled backup feature in TFS 2012 Update 2? Our production server is running 2012 update 1 and the other machine we are trying to restore is running 2012 Update 2.
We are trying to restore a backup of prod to this other server but when we choose "List Backups" in the Restore Database functionality of the Scheduled Backup tool it doesn't list anything when we point it to our production backup folder.
|
compatibility TFS Update 1 backups on TFS Update 2 backups?
|
you simply can:
clone your repo
overwrite its content with your backup local copy
ask git diff to display all the changes, then add and commit what you need to version (see the sketch below)
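A minimal sketch of that workflow (the remote URL and paths are placeholders):
git clone <your-remote-url> restored
cp -a /path/to/backup-copy/. restored/   # overwrite with the files recovered from the backup
cd restored
git status                               # shows what differs from the last commit
git diff
git add -A
git commit -m "Recover uncommitted changes from backup"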
|
My hard drive crashed recently and I went back to a recent backup of all my data.
However I had some changes in a local git repository that I had not committed before the failure. Now I want to bring everything back to where it was before the hard drive failure.
That is, I have a local copy of a folder that used to be connected to the git repository. However all the .git folders are missing. How can I synchronize the local copy with the repository, while keeping all the modifications that I had and did not commit to the repository?
|
synchronize a restored local copy of git repository
|
As stated in the documentation, the underlying API used (except on iOS 5.0.1) is NSUrlIsExcludedFromBackupKey, and according to this answer it is recursive.
|
I would like to exclude a whole hierarchy of files and folders from the backup using
NSFileManager.SetSkipBackupAttribute()
The documentation is about "files". Can I also pass a path and all files and folders under that path - also those ones added afterwards - won't be backed up either or do I have to set the attribute on every individual file?
|
Is NSFileManager.SetSkipBackupAttribute() also working for (recursive) paths?
|
Your user ID (or any group containing it) is probably not contained in any role of the Analysis Services database that you restored, and you included the security information when restoring. If you do not have Analysis Services server administration rights, then you cannot access the database. You could access the database in Management Studio using a user id that has administrative rights, either on the database, or on the whole server (by default the local administrators group of the server computer has Analysis Services server administration rights), and change the rights (below the "Roles" node of the database in Management Studio) so that you can access the database.
Or, you could ask someone having administrative rights to restore the database again, this time unchecking "Include security information" in the dialog. You probably cannot do this yourself, as if you do not have administration rights to a database, you cannot overwrite it by the restore.
|
Hullo everyone.
After restoring a backup (comprehensive of security information) of AdventureWorksDW cube I am unable to log into it using windows authentication.
SQL Server Analysis service version is 2005 (9.00.1399.6), installed on a Windows 2003R2 machine joined to a domain.
Should you need any further information please do not hesitate to let me know.
|
The connection failed because of an error in initializing provider - SSAS 2005 - restored DB
|
Sorry, but I suspect you were using the GUI and were not careful with it. Please prefer queries over the GUI for this kind of work.
Now to your issue. I am not sure if you have already fixed it, but since there is no reply here:
The db1 that is stuck restoring is of no use now, so find the latest backup and restore db1 from it (you can use point-in-time recovery if you have transaction-log backups).
Restore db2 from the 30-day-old backup by query; use the MOVE option in the RESTORE, or create a new empty database first and then restore over it with the REPLACE option.
|
Let's say I have database named db1 on SQL Server. I have daily backup, and I wanted to restore month old backup to new database, so I can recover just some info.
I created database db2, and tried to restore the db1 backup to new and empty db2 database.
I got message that backup I selected does not contain db2 backup and it started to restore db1 itself!!!
So now, for quite some time, next to db1 there is message (restoring...)
How can I stop restoring db, I didn't wanted to restore db1 at first place, and that's why I choose to restore backup to db2 destination.
Is there any chance I still have today's db, not this one month old?
I still can't open db1 to see what I have there, because it's not accessible.
Thanks.
|
Restoring database on SQL Server went wrong
|
Backing Up the SVN repository server should be sufficient. One thing to remember is if the working machine goes down you would lose any configurations you have made to the working machine such as server configurations, system variables etc, so make sure those are documented. If you are running in a virtualized environment you can take a backup of the machine image, which would really speed up your recovery time.
As for svndump, That is exactly what it is used for. You can create a new repository and load the dump file and you will have your working code and everything you need.
I would suggest doing a dry run of a recovery using the dump file, that way you will be confident you are creating the dump file correctly and you won't be trying to figure out how to restore the dump file when an actual emergency is occurring.
|
I have recently been tasked with implementing version control on our development systems. I created an SVN repository using TortoiseSVN by doing the following:
Installing the SVN server on the machine storing the code
Using the repository browser on that machine to import the source code into the repository
Having imported the code, am I right in thinking that there is no longer any need for the code that was stored there? No commits from working copies on other machines seemed to have changed that code, only the svn repository.
I am familiar with the concept that the SVN does not store the raw code simply, but rather the differences. For that reason, my question is: is backing up the SVN repository folder ON IT'S OWN sufficient as a backup strategy for this code?
We do also do a manual backup of the code on the working machine every month as a precaution, and I am considering writing a scheduled batch file to svndump the repository to remote drive daily. If I do the latter, and we, hypothetically, lost both the repository server and the working machine, would we be able to recover the code from this daly svndump?
Hope that makes sense; Thanks in advance.
|
Source code kept in SVN repository - what do I need to back up?
|
Have you checked your 'Volumes' path? Path names usually don't contain the server name. I would go for (if not (exists POSIX file "/Volumes/computer-backup/Web") then make new folder with properties {name:"Web"} at "computer-backup") – tompaman Dec 27 '13 at 13:35
|
At work, I want to backup some Mac files to a Windows share and have created an AppleScript. It mounts the destination then creates a folder if it doesn't already exist. It then copies the contents of a local folder to this new folder on the destination. It then unmounts the destination
mount volume "smb://service.backup:<password>@server.domain.com/computer-backup"
set dest to result as alias
tell application "Finder"
if not (exists POSIX file "/Volumes/server.domain.com/computer-backup/Web") then make new folder with properties {name:"Web"} at "computer-backup"
duplicate items of folder "Macintosh HD:Library:Server:Web" to POSIX file "/Volumes/computer-backup/Web" with replacing
eject dest
end tell
The mount is fine. But if the folder "Web" exists on the destination then it errors - despite the "if not (exists" statement. I have a very similar script at home (with different usernames, passwords and server addresses) which works fine. I am pretty sure I have had this working at work as well (hence the use of POSIX) but not anymore.
I chose this route as a more granular alternative to TimeMachine and to show my boss I could write AppleScript :>)
Any help gratefully received.
All the best
John
|
error "Finder got an error: The operation can’t be completed because there is already an item with that name." number -48
|
There are a host of desktop applications which allow you to browse usb-connected iOS devices.
iExplorer: http://www.macroplant.com/iexplorer/
i-FunBox: http://www.i-funbox.com/
iTools: http://www.itools.cn/en_index.htm
DiskAid: http://www.digidna.net/diskaid
I use IExplorer. There is a free demo version and an option to buy.
|
Is it possible to create a .xcappdata-like file without using Xcode?
Our user is facing a problem with their iPod, so they want to restore the iPod and remove all the applications, but one application contains some important data. All the data is stored in the Documents folder.
Because they don't have Xcode, they are unable to create the .xcappdata file.
So please help if anyone has an alternative way of downloading the Documents folder without using the Organizer.
|
How to create like .xcappdata without using xcode?
|
I think Shape should contain a properties object, for example this.properties. In that object you should store all the information about the shape (it will be something like the shape's model: only data, without any methods or other internal class data). Then in the backup function you should back up only properties, not the whole shape object.
(I'm a non native english speaker, feel free to correct my message if need)
Are you saying this.properties returns an object with all the non-function properties? Because it does not for me.
– roboguy222
Dec 10, 2013 at 1:35
@roboguy222 I create a fiddle, check it out: jsfiddle.net/vCTze If it is not for you, tell me why, may be I can to find another solution.
– Alex Fitiskin
Dec 10, 2013 at 8:03
Oh, so you stored everything within the object in another object called properties. I guess that works, but it would be nice to not have to type properties every time.
– roboguy222
Dec 10, 2013 at 16:57
@roboguy222 I prefer this way, because it is similar to backbone's model (everything stored in the atrributes object).
– Alex Fitiskin
Dec 11, 2013 at 7:54
|
In javascript, I have an object (think of it as a shape), that can be put in edit mode and edited, or a not editable mode. When editable mode, I want to have a cancel button that cancels all edits and returns the shape back to its original form. I was hoping to use something like the following, but assigning things to 'this' doesn't work. What would the best way to do this be? I would prefer not to use external objects to store backups, because there could be many shapes and sorting out which backup corresponds to what adds code that is not as nicely packaged.
Shape.prototype.edit = function() {
this.backup = this;
...
}
Shape.prototype.cancelEdit = function() {
this = this.backup;
...
}
|
Creating a backup of 'this'
|
What does it mean: skipping collection: my_db.my_collection.$_id_, why id field? Does it mean some data wasn't dumped, or that there are no ID's in the backup (so while restoring the db, new ID's will be assigned?)
Collections with a $ are used by system namespaces (in this case the _id_ index for your collection) and can safely be skipped. You're only seeing this informational message because you included the -v (verbose) option.
mongodump (2.2+) exports index definitions to a <dbname>.metadata.json file which will be used by mongorestore to recreate indexes when you restore the dump.
Strange is that mongo show dbs returns the size of my_db is about 1Gb, but the whole size of .bson files is just 150Mb?
By default MongoDB preallocates storage to prevent file system fragmentation and reduce delays when new data files need to be created. The allocated file size will be larger than the size of your data. Additionally, MongoDB allocates record padding for documents so that documents have some room to grow in place. The Storage FAQ in the MongoDB manual has more details.
If you run db.stats() for a database you should see both fileSize and dataSize values for comparison. The 150Mb of your BSON files should be close to the dataSize value, while the 1Gb reported by show dbs corresponds to the fileSize.
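For example, using the database name from the question (standard mongo shell usage):
mongo my_db --eval "printjson(db.stats())"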
|
I tried to mongodump my db:
sudo mongodump -v --dbpath /var/lib/mongodb --out ~/backups/mongodb_dump/
but each collection had interesting output (it was verbose), there are some interesting lines:
Tue Dec 3 06:32:32 [tools] query my_db.my_collection nreturned:101 reslen:43408 0ms
Tue Dec 3 06:32:32 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:8565 reslen:4194597 77ms
Tue Dec 3 06:32:32 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:8053 reslen:4194704 75ms
Tue Dec 3 06:32:32 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:7936 reslen:4194704 82ms
Tue Dec 3 06:32:32 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:7932 reslen:4194524 83ms
Tue Dec 3 06:32:32 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:9201 reslen:4194491 201ms
Tue Dec 3 06:32:33 [tools] getmore my_db.my_collection cursorid:7364310293552401077 nreturned:7253 reslen:3078796 544ms
49041 objects
Tue Dec 3 06:32:33 [tools] skipping collection: my_db.my_collection.$_id_
flickr-app-development-production.download_stats to /home/user/backups/mongodb_dump/my_db/my_collection.bson
What does it mean: skipping collection: my_db.my_collection.$_id_, why id field?
Does it mean some data wasn't dumped, or that there are no ID's in the backup (so while restoring the db, new ID's will be assigned?)
Strange is that mongo show dbs returns the size of my_db is about 1Gb, but the whole size of .bson files is just 150Mb?
|
Is it normal that mongodump shows `skipping collection: my_db.my_collection.$_id_`?
|
This is the default location at which the solution file is created. However, if you want to change the location of the solution file, then when you create the project simply uncheck the "Create directory for solution" box; this will create the .sln file in the same directory as the web project. Or you may start with a Blank Solution under "Other Project Types --> Visual Studio Solutions" in the New Project dialog and after that add your website.
|
I just noticed that my one-project solution is in:
C:\Users\Clay\Documents\Visual Studio 2013\Projects\Platypus\Platypus.sln
...whereas the project itself is in:
C:\Platypus
What is the sense of separating things out that way? I did choose the project's folder, but not the solution's. I can see why the "buried" location would be used if I hadn't chosen a specific separate location for my project, but I would expect that choice to have put the project AND the solution in that folder.
Seems like a weird way to run a ship.
Is this normal? Any "gotchas" as far as backing it up goes? IOW, is backing up the project enough, or do I need to explicitly back up both?
|
Why would the solution and the project be in totally different locations?
|
Making a backup of the complete eclipse folder makes sense as it contains all the plug-ins which you have installed. Sometimes eclipse breaks because of a new plug-in or just any random problem. Then it's great to have a backup and you don't need to install every plug-in which you had before.
|
My "~/eclipse" folder contains the following subfolders: about_files, configuration, dropins, features, p2, plugins, readme.
Is it important to include this folder, or some of its subfolders, in the regular backup of my hard-drive? Or are they just standard files that come with the installation and can always be restored by re-installing Eclipse?
|
Should I backup the "eclipse" folder?
|
You don't need to back up these .svn folders. Besides that, you don't really need to back up your working copies at all, since you can always check them out from the server again.
However, if your working copies are not large -- there is nothing against backing them up as all other data on your system.
|
I am trying to build a backup plan for my hard-drive, and I wonder whether I need to backup all the ".svn" subfolders?
Note: the repository is not on my computer - my computer contains only a working copy.
|
Should I backup the ".svn" folders?
|
Self-inflicted: the password contains a '$'. Escaping that with '\' fixes the issue.
|
What I really want to do is move a mess of dataomic data from my lab hosts to my new staging hosts.
Lab is a computer in a closet in our office. Staging is our new hardware at a colocation site in the suburbs.
I think backup is the best way to handle this, but I'm open to other ideas.
I'm doing this from my transaction host in lab (credentials sanitized)
$ bin/datomic backup-db "datomic:sql://drone-develop?jdbc
:postgresql://[redacted]:5432/datomic?user=[redacted]&password=
bob+zazz@35szoonn_ZZQ" file:/tmp/backup
/tmp/backup is created.
Then the process blows up:
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException:
FATAL: password authentication failed for user "datomic"
...
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication
failed for user "datomic"
...
The port is open between transaction host and the db server (redacted).
Using psql I can login with those credentials, from db host and local workstation.
Log files
datomic-pro-0.8.4020/log/2013-11-07.log
...
2013-11-07 21:37:00.121 INFO default datomic.slf4j.bridge - SLF4J Bridge installed
2013-11-07 21:37:02.305 INFO default datomic.kv-cluster - {:tid 10, :pid 7864,
:event :kv-cluster/retry, :StorageGetBackoffMsec 0, :attempts 0, :max-retries
20, :cause "org.postgresql.util.PSQLException"}
|
Datomic Backup is failing and I don't know why
|
Damn, I found the problem: there is an issue if the lockfile is owned by a different user than the one rsnapshot runs as...
|
I have to pause rsnapshot from running some backups some times, so I created a lockfile for this time:
cat > /var/run/rsnapshot/rsnapshot.pid << EOF
$$
EOF
sleep 120s
But rsnapshot tells me "removing stale lockfile" and goes on with its backup.
What do I have to do so that the lockfile isn't stale for rsnapshot?
The man tells this:
If a lockfile exists when rsnapshot starts, it will try to read the file and stop with an error if it can't. If it can read the file, it sees if a process exists with the PID noted in the file. If it does, rsnapshot stops with an error message. If there is no process with that PID, then we assume that the lockfile is stale and ignore it unless stop_on_stale_lockfile is set to 1 in which case we stop.
That would mean, it shouldn´t be stale as long as the bashscript runs. But it doesn´t work this way.
Edit:
Damn, I found the problem, there is a problem if the lockfile is owned by another user than rsnapshot runs from...
|
Removing stale lockfile - rsnapshot doesn't like my lock files
|
I magically found that running:
call_command('dbbackup', clean=True, compress=True, interactive=False)
works perfectly.
|
I have the command :
./manage.py dbbackup --clean --compress
provided by the django-dbbackup app which performs a backup of my PostgreSQL database to Amazon S3. I am trying to run this command inside a django celery task run daily.
When I run:
from django.core.management import call_command
call_command('dbbackup --clean --compress', interactive=False)
I am getting an exception because of the clean and compress arguments.
Any ideas on how I can run this command?
|
Running python manage.py command from django with arguments
|
From the Net::FTP documentation:
rmdir ( DIR [, RECURSE ]) Remove the directory with the name DIR . If
RECURSE is true then rmdir will attempt to delete everything inside
the directory.
You don't need to worry about the prompt thing. Just use $ftp->rmdir($dir, 1) and it will delete the dir including everything in it.
|
I am trying to delete a non-empty directory via FTP using a Perl script. In order to do this I first need to remove contents inside this directory and then delete directory.
In FTP you need to disable prompt to do this. Otherwise it will keep asking for confirmation on deleting every file.
ftp> prompt
Interactive mode off.
ftp> mdelete 2013-10-01-full/*
ftp> rmdir 2013-10-01-full
How can I turn prompt off in Perl. There is no such feature listed in Net::FTP. I even tried $ftp->prompt;
|
How to turn off Interactive mode in FTP (perl)
|
Response from SQL Azure team:
Hello,
We have investigated the issue and should have a fix soon. In the
meantime I would recommend creating a filter for these emails. Once
the fix is rolled out you can remove the filter (I will respond back
to this thread once it is).
If you do not want to receive any failure emails you can opt-out
permanently via the Unsubscribe link at the bottom of the email. Note,
however, that this is permanent and will apply to all export failure
emails: once you have opted out there is no way to opt back in later.
Apologies for the inconvenience,
-Stephen
|
UPDATE:
I was able to contact the SQL Azure team and they are prioritizing the bug. Hoping for a solution soon and will update here when I hear back.
While testing the new automated Export function in SQL Azure, I set a test database to backup nightly. When this test was complete, I deleted the test database, but the automated export task still attempts to run nightly and floods my team with emails regarding the failure of this orphaned database. Is there any way to delete the export job, or perhaps at least suppress the bogus alert?
How to repro:
Create a blank SQL Azure DB
Turn on automatic export.
Wait 48 hours to see a couple of exports successfully occur
Delete the test database created in step 1
Desired result: Export task is deleted/disabled and does not attempt to make an export nightly. No alert emails are sent.
Actual Result: Export task on the deleted database is attempted nightly and a failure email is sent to my team nightly.
|
SQL Azure automated backup fails on a DB which has been deleted - cannot remove backup task
|
You can restore the .trn files using RESTORE LOG (WITH NORECOVERY).
You need to do a full backup & restore (WITH NORECOVERY) of the database before you can restore any transaction logs.
If you can setup a VPN, it's best to let SQLServer do the whole backup & restore & verify - the copy phase is pretty easy to override - just write your own copy SQLAgent job & disable the copy job that SQL setup for you.
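Roughly, the manual sequence on the secondary looks like this; the server, database and file names are made up, and WITH NORECOVERY is what keeps the database ready to accept further log restores:
sqlcmd -S SECONDARY\INSTANCE -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\logship\MyDb_full.bak' WITH NORECOVERY"
sqlcmd -S SECONDARY\INSTANCE -Q "RESTORE LOG MyDb FROM DISK = 'D:\logship\MyDb_20130920.trn' WITH NORECOVERY"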
|
I need to setup a Disaster Recovery. I need to setup Log shipping to remote server not in same network. I can transfer log shipping trc files to remote machine through internet. But how to import trc files to secondary server. Actually i am trying to logship to a remote server for backup copy of database.
|
sql server log shipping to remote server in different network
|
This is not possible. At least, you won't be able to use the backup to restore on an older version.
Nenad Zivkovic provided a good link and there are several ways out of this situation listed:
Upgrade the older version of SQL Server to be at the same level as the newer SQL Server.
Script out the objects from the newer database and then usp a bcp process to extract the data from the newer database and import it into the older database.
Use the SQL Server Import and Export Wizard to build an SSIS package to move the data (it will move the data only).
Build a custom SSIS package to do the data move.
Use replication to move the data from the newer database to the older one.
Use some other form of scripting, such as with PowerShell, to keep the databases in sync.
(Source: http://www.mssqltips.com/sqlservertip/2675/why-cant-i-restore-a-database-to-an-older-version-of-sql-server/)
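As an illustration of option 2 in that list, a hedged bcp sketch (server, database and table names are placeholders; -T uses Windows authentication and -n keeps the native format):
bcp NewerDb.dbo.Customers out Customers.dat -S newer-server -T -n
bcp OlderDb.dbo.Customers in Customers.dat -S older-server -T -n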
|
I've created a Backup of a Database on a server with SQL Server 2008 R2.
I wish to restore this onto a server that is running SQL Server 2008.
I've received the error:
"The database was backed up on a server running version 10.50.4000.
That version is incompatible with this server, which is running
version 10.00.5500."
Is it possible to produce a version 10.00.5500 compatible backup from 10.50.4000.
If not what other options do I have, or other ways to create the database.
I have tried to use the Copy Database task, but also received errors.
|
Create SQL Server 2008 Backup from 2008 R2
|
Linux uses advisory locks, so nothing actually prevents you from modifying a file that's being read/written by another process. If your programs lock the files they are working on, one of the two will complain about the file being opened by some other program.
What usually happens when a file is concurrently modified is data corruption.
Anyhow, this is quite rare, because files are seldom modified. What most commonly happens is that the original file is removed/truncated, and a new one is added in its place.
When a file is removed, Linux assigns a new inode to the new file, so, the old file remains accessible by its previous inode.
When a file is truncated, it should keep the same inode (I'm unsure, though). Anyway, if some other process was accessing the file, it will get an I/O error: it was at location X, and when it tries to read location X+1 it gets an error, because the file now has length 0 and X+1 is out of range. By examining the situation, a program can determine that the size of the file has changed, which means it's being concurrently modified.
To summarize, on Linux, synchronization of I/O operations is responsibility of the single processes, which can ask the OS for a help, but they are not forced to.
|
I ask this question to know how careful I need to be about accessing and editing files while I am running a backup on my Linux machine. What happens to the compression process (specifically zip) or the files if I open or edit them while they are being compressed?
Update: Just barely I removed a file while it was being zipped. Zip quit working on that file immediately and warned me that the file's size changed.
|
What happens if a file is edited while being compressed by zip? [closed]
|
I just solved my problem! The fix was at the point where the image is stored: it was necessary to encode it with base64_encode. With that, the PHP script could generate the INSERT correctly, precisely because the string no longer contains any special characters.
|
I am developing a system where the user can register and search for images. These images are saved in a table that has a field MEDIUMBLOB.
The problem I have starts with the fact that the system must also make a backup of the database, that is, it will also need to export and import data.
When I do the backup, for example, a table which only has "regular" fields (such as varchar, int, date) the PHP script can perform the backup normally, but when I try to back up the table where the field type is MEDIUMBLOB, it returns an error.
Is there a way to backup the database using PHP, for a table containing a MEDIUMBLOB field?
|
How to import / export database with field MEDIUMBLOB?
|
use mysqldump
an example of what you want
mysqldump --user=dbuser --password --tab=~/output/dir --all-databases
that will put a separate sql file in ~/output/dir for every database the user has read and lock access to
a full list of commands found here
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
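Since --tab cannot be combined with --all-databases (see the comment below), a per-table loop is a hedged workaround; the credentials and the backup directory are placeholders:
#!/bin/bash
# dump every table of every database into its own dated .sql file
STAMP=$(date +%Y%m%d_%H%M)
OUTDIR=~/backups/$STAMP
mkdir -p "$OUTDIR"
for db in $(mysql -u dbuser -pdbpass -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
  for tbl in $(mysql -u dbuser -pdbpass -N -e "SHOW TABLES FROM \`$db\`"); do
    mysqldump -u dbuser -pdbpass "$db" "$tbl" > "$OUTDIR/${db}.${tbl}.${STAMP}.sql"
  done
done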
Thanks for your help.. I am getting this error.. --databases or --all-databases can't be used with --tab .. and Can I save it directly on my windows PC rather then storing it to server and then download?
– S Park
Aug 6, 2013 at 14:30
|
I have looked everywhere.. but not able to find any perfect solution (free) where I can download backup of all databases->tables individually so I can restore any table anytime I want and I do not have to restore whole database..
I am using mysql database, I have like 10-15 databases and 100+ tables in each.
I can see "Auto backup for mysql" does that. but its almost 100 bucks.. I want to know if I can do with terminal or any other tool.. I want to do backup everyday.. and want to have each table.sql file with the date and time..
Is it possible??
|
backup each tables of all databases and save it to local directory?
|
You can try to use Jena; it supports persistent ontologies.
Also, you need to decide in what format you will store your ontology (XML, JSON, etc.); then, for example, the backup method can create an XML document out of every semantic entity. You can use JAXB/XStream/Gson to achieve that (Java to XML/JSON).
Good luck!
|
I'm working on a dynamic system that uses a not too big ontology, to make correct decisions based on received information. I need to back up this ontology, together with its individuals so that the system can be restored after failure, but I don't know the ontology, nor how many individuals it contains, so the backing up needs to be as generic as possible.
I would prefer that one function could be called, just to signal my backing up part of the code can do its thing, instead of demanding from the ontology code to call a method for each seperate ontology or individual.
Using the OWL API, is this possible? Can I back up my system in a generic way?
|
How to: Backing up a Ontology?
|
I don't mean to sound snarky, but older versions of MySQL can still be obtained. Why not install a copy of the older version of MySQL, restore your BACKUP files there, export to SQL using mysqldump, and then reimport into the newer version?
Perhaps use a virtual machine inside Virtualbox.
|
The mySQL manual says that backup and restore are deprecated, and removed in version 5.5.
I have ISAM tables dumped with "backup" from an earlier version (5.0) - how can I restore these to a 5.5+ mySQL database.
|
Mysql how to backup and restore after version 5.5
|
Look for this section in /etc/rsnapshot.conf file:
# If your version of rsync supports --link-dest, consider enable this.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
#link_dest 0
Make sure the "link_dest" is disabled. This is used as a flag when rsync command is called in the background. As per the man page for rsync:
--link-dest=DIR hardlink to files in DIR when unchanged
If anything --link-dest should be enabled not disabled (its disabled by default). By default rsnapshot will use cp -al <backup1> <backup2> for rotation on the encfs filesystem, which the OP is saying will break because of encfs settings. Enabling --link-dest in my testing seems to use mv <backup0> <backup1> instead, which may fix the issue.
– spinkus
Jun 5, 2014 at 22:39
I tried this and now I am getting failed to hard link :/ any ideas?
– shaneonabike
Sep 15, 2015 at 14:28
|
I'm using Rsnapshot to backup all my servers on an EncFS encrypted partition. The partition has been created with the default paranoia mode offered by EncFS, thus it doesn't support hard links.
I'm able to run Rsnapshot the first time (creating daily.0, weekly.0, monthly.0) but not the second time.
Is there a way to use Rsnapshot without the hardlinking feature? I know it sounds a bit silly, but my rsnapshot.conf is very well configured and I don't want either to switch to another software or erase and recreate the EncFS volume.
Thank you
|
Rsnapshot without hard links?
|
I have uploaded a sample app that provides backup and restore capabilities a number of different ways, including local backups, copy backups to and from iCloud, email backups, import from email, and file copy via iTunes. See link below for video demonstrating these capabilities and you can download the sample apps from the site.
http://ossh.com.au/design-and-technology/software-development/sample-library-style-ios-core-data-app-with-icloud-integration/sample-apps-explanations/backup-files/
EDIT
It should be safe to create a new persistentStoreCoordinator with the same fileURL and then use the migratePersistentStore API without closing the app; save the main MOC first, though. I always use JOURNAL=DELETE mode to ensure I only have a single file to deal with. If you are using WAL mode then you would need to back up all three files used by sqlite.
|
I would like to make backup copies of my app's main sqlite DB while my app is running.
1) I have read that it is safe to just copy the sqlite file if the DB has been checkpointed (at that point the wal file contains no important data). Does [managedContext save:] do that checkpointing, or is there something else I have to do? (ref -shm and -wal files in SQLite DB)
2) Is there any way, short of tearing down the whole core data stack, to be sure that core data doesn't try to write to the sqlite file while I'm copying it? My app does save frequently after any user input, and it would be nice if there was some way to force that to block for a second.
|
Backing up sqlite DB on iOS
|
It's a very basic script; these are the steps:
Install s3cmd from official site
Invoke s3cmd in your shell and add your AWS credentials
Create a new S3 bucket to store the files, better than Glacier
In your bucket properties set lifecycle to autoremove them after 90 days
Go back to your shell and paste the script below
Now create a log file /var/log/my_backups_to_s3
Make the script executable and give it write permission on the log file
Try it with ./your_script_name
EXAMPLE SCRIPT TO MYSQLDUMP AND PUSH IT TO S3
#!/bin/bash
MY_FOLDER="/__PATH_TO_WRITABLE_FOLDER__/"
NOW="`date +%Y-%m-%d-%R`"
FILE=$MY_FOLDER"___FILE_NAME___"$NOW".sql"
mysqldump -h localhost -u USER_DB -pPASSDB -c --add-drop-table --add-locks --quick --lock-tables DBNAME > $FILE
s3cmd put $FILE s3://___YOUR_BUCKET_NAME___
echo "$(date -u) BACKUP DONE - MySQL uploaded to hipespace ^^ -> $FILE" >> /var/log/my_backups_to_s3
NOTES:
There is no space between -p and PASSDB
I save the file before pushing to S3, this is not mandatory
Bucket need about 30 min to be fully available, before that I had some connection errors
anyway this is only to back up the db; give the s3cmd docs a try, they have another command, "sync", that you can use to push your images,
hope it helps
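For the images folder mentioned in the question, that sync command would look roughly like this (the local path is a placeholder):
s3cmd sync --delete-removed /var/www/myapp/images/ s3://___YOUR_BUCKET_NAME___/images/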
|
I have moved my first site to a EC2 micro instance, now the project is working I am trying backup database and images folder, if possible inside Amazon (Glacier? S3?).
I have read a lot about it, but I am sure anyone has scripted this before.
Stack:
- Ubuntu Server 12.04 LTS
- Apache 2.2.1
- PHP 5.4.4
|
Backup database and web folder in EC2
|
I've recently had trouble using mysqldump. My locales, client, server and table charsets, and everything else that could possibly be set to use the utf8 charset was set to use it, and still I was getting garbled ASCII output from mysqldump, which led to errors when importing because of all the ??·$^"·???. mojibake input. My solution (hope it works for you):
export:
mysqldump -u USER -pPASS -r db.sql db
import:
mysql -u USER -pPASS db
MYSQL [db]>SOURCE db.sql
Also, solutions like Percona XtraBackup may seem overkill at first glance, but this one in particular works really well and the basic usage is really simple. The tool is GPL licensed, and you don't need to worry about inexact replicas, because it copies the binary database files as they are, without generating commands that are supposed to recreate a database like yours but then don't...
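Another knob that is often worth trying in this situation (not part of the fix above, just a standard mysqldump/mysql option) is forcing the character set explicitly on both sides:
mysqldump --default-character-set=utf8 -u USER -pPASS db > db.sql
mysql --default-character-set=utf8 -u USER -pPASS db < db.sql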
|
I have a MySQL that that primarily has innodb tables. I did back up using mysqldump, phpmyadmin, and by saving the files in /var/lib/mysql.
When I try to restore them now, all the tables are restored except for a table that contains "URLs". The information in that table is not complete. I tried restoring the three types of backup and it's the same. (The URLs are stored using UTF8)
Any idea what did that happen? Is there a chance that mysqldump doesn't work always as expected?
Do you think that there is a way for me to restore my "URL" data?
|
Backup/Restore issue with mysqldump (table with URLs)
|
Timestamps are obviously global, thus a snapshot ID only needs to be a single timestamp. In SQL Server, for example, you can run SELECT CURRENT_TIMESTAMP to get the current timestamp.
When you want to export, run individual queries on each table to export the rows that have a timestamp between the last exported and the current one. If the timestamp fields are indexed, each of these queries should be quite fast, obviously dependent on the amount of data to be exported.
Assuming you run these exports while other updates on the database can occur, it's important that you only get the current timestamp once, store that as a variable and work with that value (as opposed to using e.g. CURRENT_TIMESTAMP freely), otherwise some data will go missing occasionally.
You may want to consider having a Deleted column flag on each table and updating this rather than deleting rows, so you know which rows were 'removed'.
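As a rough illustration of that flow in Python (the table name, the '?' paramstyle, and the connection object are placeholders; LAST_UPDATE_DATE comes from your schema):
def incremental_export(conn, last_snapshot_id):
    """Return (rows, new_snapshot_id) for rows changed since last_snapshot_id."""
    cur = conn.cursor()
    # Capture the snapshot timestamp ONCE, before reading, and reuse it everywhere.
    cur.execute("SELECT CURRENT_TIMESTAMP")
    new_snapshot_id = cur.fetchone()[0]
    # Export only rows touched in the window (last_snapshot_id, new_snapshot_id].
    # LAST_UPDATE_DATE should be indexed; '?' is the sqlite3/ODBC paramstyle,
    # other drivers may use '%s'.
    cur.execute(
        "SELECT * FROM my_table "
        "WHERE LAST_UPDATE_DATE > ? AND LAST_UPDATE_DATE <= ?",
        (last_snapshot_id, new_snapshot_id),
    )
    return cur.fetchall(), new_snapshot_id
The caller stores new_snapshot_id and passes it back in on the next export, which is exactly the snapshot-id handshake described above.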
|
For a given table in my system of record (RDBMS), I need to implement a functionality to export the records incrementally. For example, if a user runs an export job which returns x number of records, I want to return a snapshot id back to the user. For the next export job, user will pass that snapshot id to me and using that I should be able to export only the records that have either been modified or added since. Ideally I would like my snapshot ids to be re-usable. In other words, I do not want my snapshot ids to expire, but this is not a hard requirement.
Given that I have LAST_UPDATE_DATE (Timestamp) column in all my tables, what's the best way to solve this problem?
I am not looking for code, tools or commands. I am just looking for the logic of how I should generate this snapshot id and recognize it in subsequent calls to perform an incremental export of records in a given table.
|
Incremental export records from a database table
|
I had the same problem and I solved it by rewriting the Backup and Restore methods using the SMO libraries.
Examples:
Backup Example
Restore Example
By the way, I'm doing the following in the case of a partial backup or restore.
if (partial)
{
sqlRestore.Partial = true;
sqlRestore.ContinueAfterError = true;
}
I hope this will help you.
~Bassam
|
I have a SQL Server 2008 database which has filestream.
I need to backup this database excluding filestream and restore the database on SQL Server 2012.
I have researched this and found:
http://social.msdn.microsoft.com/Forums/en-US/sqldisasterrecovery/thread/bcd5dddf-5a66-42a9-acf4-a63136f3658a
This works when I am backing up the database without filestream and restoring onto SQL Server 2008, as per the instructions at the URL.
However, when I run the scripts to restore to 2012, I get the following errors:
Msg 3634, Level 16, State 1, Line 3
The operating system returned the error '3(The system cannot find the path specified.)' while attempting 'FindFirstFile' on X
Msg 5520, Level 16, State 1, Line 3
Upgrade of FILESTREAM container ID 65537 in the database ID 10 failed because of container size recalculation error. Examine the previous errorlog entries for errors, and take the appropriate corrective actions.
Msg 5056, Level 16, State 6, Line 3
Cannot add, remove, or modify a file in filegroup 'FileStreamFileGroup' because the filegroup is not online.
Msg 3013, Level 16, State 1, Line 3
RESTORE DATABASE is terminating abnormally.
I think it has something to do with SQL upgrading the database from 2008 to 2012.
Any ideas? Any help would be much appreciated.
|
SQL Server 2012 restore excluding filestream
|
Check your log file size using its Length; if it is bigger than 5 MB, call the extendLogFile() function.
This is C# code, which you can easily convert to Java.
Size check:
if (size > 5 * 1024 * 1024) // 5 MB threshold
{
extendLogFile(Path);
}
Copy old log file in archive directory and create new log file:
private static void extendLogFile(string lPath)
{
string name = lPath.Substring(0, lPath.LastIndexOf("."));
string UniquName = GenerateUniqueNameUsingDate(); // create a unique name for old log files like '12-04-2013-12-43-00'
string ArchivePath = System.IO.Path.GetDirectoryName(lPath) + "\\Archive";
if (!string.IsNullOrEmpty(ArchivePath) && !System.IO.Directory.Exists(ArchivePath))
{
System.IO.Directory.CreateDirectory(ArchivePath);
}
string newName = ArchivePath + "\\" + UniquName;
if (!File.Exists(newName))
{
File.Copy(lPath, newName + ".txt");
using (FileStream stream = new FileStream(lPath, FileMode.Create))
using (TextWriter writer = new StreamWriter(stream))
{
writer.WriteLine("");
}
}
}
|
How can I take a backup of a log file (.txt) automatically when its size reaches a threshold, say 5 MB? The backup file name should be like (log_file_name)_(system_date), and the original log file should be emptied (0 KB).
Please help. Thanks in advance.
|
how to take log file backup automatically
|
I would just use the mysqldump command for that. Maybe you could even call it from whatever program you are writing. The syntax is simple:
mysqldump -u <user> -h <host> -p<password> <dbname> > <filename>
this will just put the whole database into that file as an sql script.
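If the calling program happens to be Python, a minimal sketch of invoking mysqldump with a timestamped file name could look like this (user, password, database name, and output directory are placeholders):
import datetime
import subprocess

def dump_database(user, password, db_name, out_dir):
    # Build a name like mydb-2012-02-12_09-00-00.sql
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    out_file = f"{out_dir}/{db_name}-{stamp}.sql"
    # -r writes the dump straight to a file, so no shell redirection is needed.
    cmd = ["mysqldump", "-u", user, f"-p{password}", db_name, "-r", out_file]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"mysqldump failed: {result.stderr}")
    return out_file
Note that mysqldump must be on the PATH (or be referenced by its full path) for the call to succeed.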
|
I need to make a database backup of mysql, but it should have certain features.
The backup should execute on a click of a buttton in java swing.
Path: the path where the backup is saved should be declared by me (e.g. C:/backup), but it should be chosen so that there are no saving problems if I give the jar file of my Java program to other users. Which path is best to use then?
Name: the name will be made up of 3 strings: a basic string, then a DateTime-format string, followed by a third string (e.g. FileName-2012-02-12 09:00:00-Name).
How can I achieve this? Can someone guide me, please?
I tried to code it using mysqldump but still couldn't get it done.
String dbName = "dth";
String dbUser = "root";
String dbPass = "root";
String executeCmd = "";
executeCmd = "mysqldump -u " + dbUser + " -p" + dbPass + " " + dbName + " -r C:\\backup.sql";
try {
Process runtimeProcess = Runtime.getRuntime().exec(executeCmd);
int processComplete = runtimeProcess.waitFor();
if (processComplete == 0) {
System.out.println("Backup taken successfully");
} else {
System.out.println("Could not take mysql backup");
}
} catch (Exception e) {
System.out.println(e.getMessage());
}
I tried the above code but couldn't get it to work; it keeps giving me a
CreateProcess error=2, The system cannot find the file specified
error.
I have tried creating a .sql file at that location beforehand and I still get the same error.
|
How to create a MYSQL backup with custom names and information
|
It appears you didn't understand what the FreeFileSync documentation was trying to tell you. The scripts themselves are not like .bat files, i.e., they are not self-executable; Ubuntu has no idea what to do with the file you're double-clicking. They need to be passed to the FreeFileSync executable on the command line, in a pre-defined way. Probably something like:
./free_file_sync myBatch.ffs_batch
I'm not sure what the executable is called, but it's probably something along those lines.
|
How can I execute a FreeFileSync batch script on Ubuntu?
I have set up a batch job, saved in a file with extension ".ffs_batch". What now? If I double-click on it, there is no file association.
I am familiar with executing .bat files on Windows by double-clicking, but I'm on Ubuntu. I expect there to be a command-line like: sudo something batchjob.ffs_batch. Or do I need to set the correct file association before executing? Or write yet another script to execute the .ffs_batch files.
I am on Kubuntu 10.10. I installed FreeFileSync via the program uploader (so it may not be the absolute latest version).
Needless to say I do have access to Google and the Help files. There is plenty of info on what to do under Windows, but I couldn't find the relevant explanation for Linux.
Here is all the info I have at this time:
FreeFileSync Help
Batch Scripting
FreeFileSync can be called from command line and supports integration into batch scripts. This section gives some general hints and examples for Windows *.cmd and *.bat scripts. When FreeFileSync is started in batch mode (a *.ffs_batch file is passed as argument) it returns one of the following status codes:
|
linux/ubuntu batch scripts with FreeFileSync
|
In addition to limits.conf, you need to do the following:
Edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
Please try it and let me know the result.
|
A Java error occurred when I tried to take a Cassandra snapshot:
[root@cassandra mytest]# /usr/local/apache-cassandra-1.1.7/bin/nodetool -h localhost mytest
So I added the following to /etc/security/limits.conf,
following this: http://www.datastax.com/docs/1.1/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
* soft as unlimited
* hard as unlimited
root soft as unlimited
root hard as unlimited
But the error still occurred.
Please help me.
I use CentOS.
PS:
I following this:
On CentOS, RHEL, OEL Sysems, change the system limits from 1024 to 10240 in /etc/security/limits.d/90-nproc.conf and then start a new shell for these changes to take effect.
* soft nproc 10240
But I can't find /etc/security/limits.d/90-nproc.conf.
I'm sorry, my English is poor.
|
Too many open files occured when cassandra snapshot
|
I wrote my own HostBackup Faraday middleware for this case.
You are welcome to use it! https://github.com/dpsk/faraday_middleware
Here is an article: http://nikalyuk.in/blog/2013/06/25/faraday-using-backup-host-for-remote-request/
|
In my project I'm using the following small library for interacting with an external service:
class ExternalServiceInteraction
include Singleton
URL = Rails.env.production? ? 'https://first.production.url.com' : 'http://sandbox.url.com'
API_KEY = Rails.env.production? ? 'qwerty' : 'qwerty'
DOMAIN = Rails.env.production? ? 'prod.net' : 'stage.net'
def connection
conn = Faraday.new(url: URL) do |faraday|
faraday.response :logger # log requests to STDOUT
faraday.adapter Faraday.default_adapter # make requests with Net::HTTP
end
end
def return_response(item=true)
if @resp.status == 200
response = item ? Hash.from_xml(@resp.body)['xml']['item'] : Hash.from_xml(@resp.body)['xml']
else
response = Hash.from_xml(@resp.body)['xml']['error']
Rails.logger.info "response_error=#{response}"
end
response
end
def get_subscribers
path = 'subscribers'
data = { 'X-API-KEY' => API_KEY, 'domain' => DOMAIN }
@resp = connection.get(path, data)
return_response
end
def get_subscriber(physical_id)
path = 'subscriber'
data = { 'X-API-KEY' => API_KEY, 'Physical_ID' => physical_id, 'domain' => DOMAIN }
@resp = connection.get(path, data)
return_response
end
# and so on
end
Now I want to use 'https://second.production.url.com' if there is any error interacting with the service via the first URL. What is the best way to set this up?
At first I tried to ping the server / check for a 200 OK, and if that failed, switch to the second URL. But there are situations where the server is up and running and returns 200 OK, yet the API isn't reachable. My main issue is that I don't see how I can catch the error and re-run the method with the other URL from within the library.
|
Use another source if external service is down
|
It depends on what you mean by "continuous". If you want a copy of the database running that is always the same as the main database, you will need to set up "replication" - see http://dev.mysql.com/doc/refman/5.1/en/replication.html for how to do that.
If you want a database backup that is relatively current, then running mysqldump every hour or so is a pretty good solution.
You'll need to back up the files separately, because they are in your file system, not the database. Look at running rsync every hour or so.
Why do you want a "continuous" backup and how would you use it? Do either of these approaches answer your question?
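If you go the mysqldump-plus-rsync route, a minimal sketch of the hourly job (to be scheduled with cron) might look like this in Python; the database name, credentials, paths, and remote host are all placeholders:
import datetime
import subprocess

DB_NAME, DB_USER, DB_PASS = "wikidb", "wikiuser", "secret"   # placeholders
WIKI_DIR = "/var/www/mediawiki/"                             # placeholder
BACKUP_DIR = "/var/backups/mediawiki"                        # placeholder
REMOTE = "backup@backuphost:/srv/wiki-backups/"              # placeholder

def hourly_backup():
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
    dump_file = f"{BACKUP_DIR}/{DB_NAME}-{stamp}.sql"
    # Dump the wiki database to a timestamped file.
    with open(dump_file, "w") as fh:
        subprocess.run(["mysqldump", "-u", DB_USER, f"-p{DB_PASS}", DB_NAME],
                       stdout=fh, check=True)
    # Mirror the wiki files (LocalSettings.php, images/, extensions/) off-box.
    subprocess.run(["rsync", "-az", "--delete", WIKI_DIR, REMOTE], check=True)

if __name__ == "__main__":
    hourly_backup()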
|
I am administrating MediaWiki for my organisation. We use it as our intranet site, and it has accumulated a huge organisational knowledge base. I have to make sure that MediaWiki is always up and running and that the knowledge base is always backed up.
Is there a way to take continuous backups of the MediaWiki files and databases? My MediaWiki is hosted on a LAMPP server with Debian OS.
I am trying to find a way to automate the backup process.
|
Continuous Backup of Mediawiki [closed]
|
You can try to listen for filesystem changes and see whether you are responsible for them or not. The inotify framework is there to help you with such a task. inotify is a userland API; see Wikipedia: http://en.wikipedia.org/wiki/Inotify
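A rough sketch of that idea with the third-party pyinotify package (the watched path is a placeholder, and the restore worker that consults the busy set is left out):
import pyinotify

WATCH_DIR = "/data/to/restore"   # placeholder
busy = set()                     # files currently held open by some process

class Tracker(pyinotify.ProcessEvent):
    def process_IN_OPEN(self, event):
        busy.add(event.pathname)
    def process_IN_CLOSE_WRITE(self, event):
        busy.discard(event.pathname)
    def process_IN_CLOSE_NOWRITE(self, event):
        busy.discard(event.pathname)

wm = pyinotify.WatchManager()
mask = pyinotify.IN_OPEN | pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CLOSE_NOWRITE
wm.add_watch(WATCH_DIR, mask, rec=True)

# The restore logic can postpone any file that currently appears in `busy`.
notifier = pyinotify.Notifier(wm, Tracker())
notifier.loop()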
|
I'm now implementing a file system backup and restore program under Linux. The requirement is that all the operations must be performed online.
My problem is, currently, the program has no sense of the state of files to be restored.
So it is possible that some file is being edited by other application when restoration occurs, in which case, modification on the file may be overwritten by backup.
One solution that I can come up with is testing whether the file is opened by other applications before restoration and postponing restoration until the file is closed. However, to test the open state of a file, I think I would have to traverse the /proc file system, i.e. check all running processes and get an open-file list for each process, which is time-costly.
Is there a better or classic solution to this problem? Any hints will be highly appreciated.
Thank you and Best Regards.
|
How to restore the snapshot of a file system with service online?
|
It's only part of an answer, but rumor has it you can tell Windows to keep spooled documents (right-click the printer, choose "Printer Properties", Advanced, "Keep Printed Documents").
You could enable this, and then create a scheduled task (or system service, etc.) that watches the spool directory and moves all files older than a certain threshold to a more appropriate location for further processing. (The age threshold would be a reasonable way to avoid trying to move files that are currently being written.)
Then you'd have to find a program to convert the .spl files to whatever format you like, or try interpreting it yourself. It looks pretty low-level but Microsoft does offer some documentation about the MS-EMF and MS-EMFSPOOL formats that might be a start.
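The "move files older than a threshold" part of that scheduled task could be sketched in Python roughly like this (the spool path is the usual default but should be verified on your system; the archive path and age threshold are assumptions):
import os
import shutil
import time

SPOOL_DIR = r"C:\Windows\System32\spool\PRINTERS"  # verify on your system
ARCHIVE_DIR = r"C:\PrintArchive"                    # placeholder
MIN_AGE_SECONDS = 60  # skip files modified in the last minute (may still be spooling)

def sweep_spool():
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    now = time.time()
    for name in os.listdir(SPOOL_DIR):
        src = os.path.join(SPOOL_DIR, name)
        if not os.path.isfile(src):
            continue
        if now - os.path.getmtime(src) < MIN_AGE_SECONDS:
            continue  # too fresh, might still be written
        shutil.move(src, os.path.join(ARCHIVE_DIR, name))

if __name__ == "__main__":
    sweep_spool()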
|
What I'm trying to accomplish is to always keep a parsable duplicate of all printed documents, and execute a secondary process for each print.
(i.e.: Be able to parse all text, account for pages, vectors, images, etc).
Processing the document can either be done immediately or deferred (immediately is desirable).
As formats go, any PDL might be suitable; my best guess is that XPS would probably be the best bet for a parsable format. Any recommendations for other formats are appreciated.
Ideally, I'd like not to mess with the user's interaction with printing (e.g. a print settings page, or creating a virtual printer that would save an XPS and then forward the print job to the physical printer),
since users might not be tech-savvy enough to set it up or use it properly, and might mess up the process at a later date.
What I'm looking for at this time:
Documentation on the print process and flow (WDK, PDL, what else?)
How this could be accomplished, if at all possible; are there any existing solutions?
Any directions into what I should be looking at.
|
Store parsable backup of all printed documents
|
Use the exec function to call mysqldump, a backup utility that's bundled with every MySQL installation. Pass the folder where you would like the file (it will be a .sql file) to be put, and you can then simply download or FTP it from there.
A simpler option is to install phpMyAdmin, which you can use to back up any database and table.
Hope this helps.
|
Hello everyone, I am pretty new to PHP. I am trying to create a backup of my SQL database, and I want the backup to run on the PHP server at the click of a button. I found some templates, but I am having issues with them (possibly because they are the wrong templates): on most of the templates I don't know where I should enter the information that is needed to make them work (e.g. host, dbname, etc.), and another issue is I don't know if that's the only part of the code that needs to be changed. If someone could help me find a backup template and tell me step by step how to get my database to export as a file, I would greatly appreciate it!
|
Back Up My SQL database in PHP
|
A loop to do that is probably trivial enough to not need a separate one-line command. You don't need to save a portion of the filename since you're just adding .SAVE to the whole thing:
for fspec in *_test.xml; do
cp "${fspec}" "${fspec}.SAVE"
done
And, in any case, you can do it in one line if you really want:
for fspec in *_test.xml; do cp "${fspec}" "${fspec}.SAVE" ; done
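And if you ever need to drive the same thing from Python instead of the shell, a rough equivalent (pattern and suffix as in the question):
import glob
import shutil

# Copy every file matching the pattern to a .SAVE backup alongside it.
for path in glob.glob("*_test.xml"):
    shutil.copy2(path, path + ".SAVE")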
|
I have files: x0001_test.xml z0054_test.xml k5487_test.xml....
I would like to save them doing something like: cp *_test.xml ${BEGINNING}_test.xml.SAVE.
Is there a way in a bash script to store the part matched by * for each file in order to re-inject it afterwards? Or should I use a loop?
|
Make a backup copy of several files matching a pattern
|
To send only zip files that were created today:
MPUT_ZIPS="$(find $BACKDIR -maxdepth 1 -type f -iname '*.zip' -mtime -1 | sed -e 's/^/mput /')"
[...]
$LFTP << EOF
open ${FTPUSER}:${FTPPASS}@${FTPHOST}
mkdir $FTPDIR
cd $FTPDIR
mkdir ${TODAY}
cd ${TODAY}
${MPUT_ZIPS}
cd ..
rm -rf ${RMDATE}
bye
EOF
Hope this helps =)
|
I'm writing a bash script to send backups to a remote ftp server. The backup files are generated with a WordPress plugin so half the work is done for me from the start.
The script does several things.
It looks in the local backup dir for any files older than x and deletes them
It connects to FTP and puts the backup files in a dir with the current date as a name
It deletes any backup dirs for backups older than x
As I am not fluent in bash, this is a mishmash of a bunch of scripts I found around the net.
Here is my script:
#! /bin/bash
BACKDIR=/var/www/wp-content/backups
#----------------------FTP Settings--------------------#
FTP=Y
FTPHOST="host"
FTPUSER="user"
FTPPASS="pass"
FTPDIR="/backups"
LFTP=$(which lftp) # Path to binary
#-------------------Deletion Settings-------------------#
DELETE=Y
DAYS=3 # how many days of backups do you want to keep?
TODAY=$(date --iso) # Today's date like YYYY-MM-DD
RMDATE=$(date --iso -d $DAYS' days ago') # TODAY minus X days - too old files
#----------------------End of Settings------------------#
if [ -e $BACKDIR ]
then
if [ $DELETE = "Y" ]
then
find $BACKDIR -iname '*.zip' -type f -mtime +$DAYS -delete
echo "Old files deleted."
fi
if [ $FTP = "Y" ]
then
echo "Initiating FTP connection..."
cd $BACKDIR
$LFTP << EOF
open ${FTPUSER}:${FTPPASS}@${FTPHOST}
mkdir $FTPDIR
cd $FTPDIR
mkdir ${TODAY}
cd ${TODAY}
mput *.zip
cd ..
rm -rf ${RMDATE}
bye
EOF
echo "Done putting files to FTP."
fi
else
echo "No Backup directory."
exit
fi
There are 2 specific things I can't get done:
The find command doesn't delete any of the old files in the local backup dir.
I would like mput to only put the .zip files that were created today.
Thanks in advance for the help.
|
Bash script to backup files to remote FTP. Deleting old files
|
The BACKUP command is not supported by older versions of HSQLDB. Use version 2.2.9 instead of 1.8.0.
|
I'm having a problem while trying to make an online backup of an HSQLDB database.
I'm using this rc file:
urlid pentaho
url jdbc:hsqldb:hsql://localhost:9001/hibernate
username PENTAHO_USER
password whatever
And this command line:
java -jar ..\data\lib\hsqldb-1.8.0.jar --rcFile conn.rc pentaho
Once connected, I try to execute the backup command as per the manual:
BACKUP DATABASE TO './' BLOCKING;
But all I get is this error message:
SQL Error at 'stdin' line 1:
"BACKUP DATABASE TO './' BLOCKING"
Unexpected token: BACKUP in statement [BACKUP]
The question is: does anyone have an idea of what I'm doing wrong?
|
Unexpected token: BACKUP in statement
|
Here's a recipe that shows you how to create a windows service using Python:
http://code.activestate.com/recipes/576451-how-to-create-a-windows-service-in-python/
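For the "once a day at a given time" part, a minimal pure-stdlib sketch (independent of that service recipe; backup_htmls is the function from the question) could be:
import datetime
import time

def run_daily(job, hour, minute):
    """Call job() once per day at hour:minute (local time)."""
    while True:
        now = datetime.datetime.now()
        target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if target <= now:
            target += datetime.timedelta(days=1)  # today's slot has passed, wait for tomorrow
        time.sleep((target - now).total_seconds())
        job()

# e.g. run_daily(backup_htmls, 2, 30)  # every day at 02:30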
|
I want to take a backup copy of my network directories once a day at a specified time. Below is the code I currently run manually.
I want to turn this manual work into a Windows service that creates a backup copy of the specified network directory at a particular time.
import tarfile
import datetime
def backup_htmls():
tar = tarfile.open('./InputHTML_bc/'+datetime.datetime.now().strftime('%b_%d_%Y_%H_%M_%S')+".tar.gz", "w:gz")
tar.add('\\\\192.168.211.65\\shared\\InputHTML\\', arcname="Backup_Tar")
tar.close()
I have a reference for how to run it as a Windows service;
I just want to know how I can run this job once a day at a particular time (for example, if I pass the time as a parameter to the Python function, it will execute once for that day, or any other Pythonic way of doing it).
I know it will be very easy, but I am not getting the idea of where to start.
Is there any way of doing it?
|
How to run python script once in a day that is creating back-up copy of network dir?
|
You can use Event Notification for the AUDIT_BACKUP_RESTORE_EVENT event.
The Audit Backup/Restore event class occurs whenever a backup or
restore command is issued.
|
I'm using SQL Server 2008. I want to create a trigger to capture a database backup. I looked at DDL triggers but did not find anything about backups.
EDIT: What I really need to do is drop the database if someone backs it up. Maybe it is not good to do this with a trigger or an event notification; if so, please advise an alternative way.
Simply put: how can I drop a database when someone backs it up?
|
Delete Database before it is being Backuped
|
Bittorrent is an excellent choice, as it handles both incremental updates and automatic resume after connection loss very well.
To create a .torrent file automatically, use the btmakemetainfo script found in the original bittorrent package, or one from the numerous rewrites (bittornado, ...) -- all that matters is that it's scriptable. You should take care to set the "disable DHT" flag in the .torrent file.
You will need to find a tracker that allows you to track files with arbitrary hashes (because you do not know these in advance); you can either use an existing open tracker, or set up your own, but you should take care to limit the client IP ranges appropriately.
This reduces the problem to transferring the .torrent files -- I usually use rsync via ssh from a cronjob for that.
|
Hi, this question may be redundant, but I have a hunch there is a tool for this, or there should be, and if there isn't I might just make it; or maybe I am barking up the wrong tree, in which case please correct my thinking.
My problem is this: I am looking for some way to migrate large virtual disk drives off a server once a week via an internet connection of only moderate speed, in a solution that must be throttleable for bandwidth because the internet connection is always in use.
I thought about it and the problem is familiar: large files that can be moved, throttled, and able to easily survive disconnection/reconnection, etc. The only solution I am familiar with that just does this perfectly is torrents.
Is there a way to automatically and strategically make torrents and automatically "send" them to a client's download list remotely? I am working on a Windows Hyper-V host, but I use only Linux for the guests and I could easily cook up a guest to do the copying, so consider it a Windows or Linux problem.
PS: the VHDs are "offline" copies of guest servers by the time I am moving them; consider them merely 20-30 GB dumb files.
PPS: I'd rather avoid spending money
|
using torrents to back up vhd's
|
The user account you are connecting with doesn't have the permission to do a backup.
(That's what the error says)
Only users who are members of db_owner or db_backupoperator have the backup database permission by default.
Your hosting provider will probably provide some other backup mechanism.
|
I can't run a SQL backup script on shared hosting through an ASP.NET website.
This is the error message:
BACKUP DATABASE permission denied in database 'TestDB'.
BACKUP DATABASE is terminating abnormally.
This is the code:
sqlcon.Open();
//query to take backup database
sqlcmd = new SqlCommand("backup database TestDB to disk='" + destdir + "\\" + dbname + "_" + DateTime.Now.ToString("ddMMyyy") + ".Bak'", sqlcon);
sqlcmd.ExecuteNonQuery();
//Close connection
sqlcon.Close();
|
can't run sql backup script on shared hosting
|