Response | Instruction | Prompt
---|---|---|
How about using poundctl to disable and re-enable the backend server? It must be run locally (the command protocol uses Unix sockets), but you could probably launch it remotely through an SSH session.
From the man page:
OPTIONS
[...]
-B/-b n m r
Enable/disable a back-end. A disabled back-end will not be passed requests to answer. Note however that existing sessions may still cause requests to be sent their way.
-n n m k
Remove a session from service m in listener n. The session key is k.
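For example (a sketch; the control-socket path and the listener/service/back-end indices are assumptions that depend on your Pound configuration), disabling and re-enabling back-end 0 of service 0 on listener 0 over SSH might look like:
# disable the back-end before the backup starts
ssh root@loadbalancer "poundctl -c /var/run/pound/poundctl.socket -b 0 0 0"
# ... run the SQL backup ...
# re-enable it afterwards
ssh root@loadbalancer "poundctl -c /var/run/pound/poundctl.socket -B 0 0 0"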
|
Our production environments typically consist of 4-8 Apache web servers and 2 (My)SQL servers:
Each web server is affiliated to one SQL server
SQL servers have a circular replication setup
All web servers are load balanced, by Pound for example.
Every night a job backs up one of the SQL servers, locking the affiliated web servers for about 10-15 minutes.
Is there a way to configure the balancing to avoid reaching those locked servers for a short time?
Is there another way to handle this lock, other than backing up a third, non-production server?
PS: We are considering reloading the Pound configuration just before and after the backup, with an appropriate configuration file, but it feels a bit odd...
|
Load balancers and SQL backups [closed]
|
The answer to complex file-copying or backup scripts is almost always: "Use robocopy."
Bill
Thanks for the answer Bill, I will be looking at it this weekend, because for the moment I'm too backed up with my schooling. I read a little about robocopy; I hope it has the logging options I'm looking for! Thanks again.
– user1056661
Mar 22, 2013 at 17:42
|
|
I really need help creating a script that backs up files and reports the error along with each file that did not copy.
Here is what I tried:
I am creating lists of file paths to pass on to Copy-Item, in the hope of later catching errors per file and logging them.
By using $list2X I would be able to cycle through each file, but Copy-Item loses the directory structure and dumps everything into a single folder.
So for now I am using $list2, and later I run Copy-Item -Recurse to copy the folders:
#create list to copy
$list = Get-ChildItem -path $source | Select-Object Fullname
$list2 = $list -replace ("}"),("")
$list2 = $list2 -replace ("@{Fullname=") , ("")
out-file -FilePath g:\backuplog\DirList.txt -InputObject $list2
#create list crosscheck later
$listX = Get-ChildItem -path $source -recurse | Select-Object Fullname
$list2X = $listX -replace ("}"),("")
$list2X = $list2X -replace ("@{Fullname=") , ("")
out-file -FilePath g:\backuplog\FileDirList.txt -InputObject $list2X
And here I would pass the list:
$error.clear()
Foreach($item in $list2){
Copy-Item -Path $item -Destination $destination -recurse -force -erroraction Continue
}
out-file -FilePath g:\backuplog\errorsBackup.txt -InputObject $error
Any help with this is greatly appreciated!!!
|
powershell backup script with error logging per file
|
You should use that variable directly: SqlConnection expects a connection string, and you are already storing one in a string variable.
So it would simply be like this:
using (SqlConnection connection = new SqlConnection(strCon))
{
SqlCommand command = new SqlCommand(sSQL, connection);
connection.Open();
command.ExecuteNonQuery();
}
Recommended: (to store it in Web.config)
<connectionStrings>
<add name="job" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True" />
</connectionStrings>
Then access it like this: (using System.Configuration;)
ConfigurationManager.ConnectionStrings["job"].ConnectionString
Now I get this error: Incorrect syntax near the keyword 'Database'.
– Will_G
Mar 11, 2013 at 8:20
You have used the Database keyword twice, so replace the second one with your database name. So finally it would be: BACKUP DATABASE @Your_db TO DISK = 'D:\\Database.bak';
– Vishal Suthar
Mar 11, 2013 at 8:23
What is the name of the database you want to back up?
– Vishal Suthar
Mar 11, 2013 at 8:27
I have edited my Web.config file and I set using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["job"].ConnectionString)), but I keep getting this error: Incorrect syntax near the keyword 'Database'.
– Will_G
Mar 11, 2013 at 8:30
Hi @Vishal Suthar Database.mdf
– Will_G
Mar 11, 2013 at 8:31
|
|
I want to back up my DB but I get an error:
ConnectionStrings cannot be used like a method
How can I resolve this?
string strCon = @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True";
string sSQL = "BACKUP DATABASE Database TO DISK = 'D:\\Database.bak';";
using (SqlConnection connection = new SqlConnection(ConfigurationManager.Connectionstrings(strCon).ConnectionString))
{
SqlCommand command = new SqlCommand(sSQL, connection);
connection.Open();
command.ExecuteNonQuery();
}
|
Backup a database
|
On Android you can use java.util.zip (see "Zipping with Android") to zip your files and upload them to the server, and on iOS you can use the ZipFile class (look at this question for an example) to do exactly the same. Zip is a standard, so it will work both ways.
|
I have developed an application for backup and restore on iOS and Android.
For backup I am just looping over all files and uploading them to the server directly, and to restore I do the same to retrieve the data back to the phone.
But now:
1. I have to compress the files and upload them to the server,
2. also decompress them and restore them back to the mobile device,
3. and most importantly, this must be done in a cross-platform way
(i.e. I have to be able to take a backup from an iPhone and restore it to an Android phone).
Is there any way to do this? Related answers or suggestions are welcome.
Thank you.
|
Compress files from mobile and upload it to server?
|
(Answered by the OP in a question edit. Moved into a community wiki answer. See Question with no answers, but issue solved in the comments (or extended in chat) )
The OP wrote:
here is what I got to work.
Sub SaveCopyas2()
Dim newWB As Variant
Dim wb1 As Workbook
Set wb1 = ActiveWorkbook
With wb1
.SaveCopyAs ("C:\Backup.xlsm")
End With
End Sub
|
|
I am trying to adapt this code I found online
Sub SaveCopyas2()
Dim newWB As Variant
Dim wb1 As Workbook, wb2 As Workbook
Set wb1 = ActiveWorkbook
If wb1.Saved = False Then MsgBox wb1.FullName, vbInformation, "Workbook Not Saved"
'Set a filename for new workbook
newWB = Application.GetSaveAsFilename(ActiveWorkbook.FullName, "Excel Files (*.xls), *.xls", , "Set Filename")
If newWB <> False Then wb1.SaveCopyAs (newWB)
End Sub
What this does is allow the user to run a macro and save a backup by specifying the location and the name.
What I am trying to do is just have it so that when the macro is run, the file is named "Backup" and the location is C:\
Can anybody help me fix this code to do what I am looking to do?
|
Excel 2007 VBA to save open workbook as backup without changing original
|
WordPress's built-in export (WordPress eXtended RSS, or WXR) will contain your posts, pages, comments, custom fields, categories, and tags. Images can be downloaded from the old location (which must be live) to the new one; be sure to check the "Download and import file attachments" box on import.
If there are galleries managed/created by some plugin then you'd have to have a more detailed look at the particular plugin used.
|
A WordPress install on one of my servers has been compromised. What's the quickest way to export the gallery, posts and pages in a manner that won't export any back doors along with them? Then how do I import those into the fresh WordPress installation?
I want to avoid copying any php files as the attacker may have left a back door. I also want to avoid copying the entire database because the attacker may have left a back door in there, too.
|
How would I copy the gallery, posts & pages from a compromised wordpress install to a fresh one?
|
The @{5 days ago} syntax relies on information from the reflog, as
explained in the section of the git-rev-parse documentation quoted below.
Reflogs are local to a repository, and never transferred by clone, fetch or
push. This is not the information displayed by git log, unless the -g or
--walk-reflogs option is used.
Bare repositories generally don't keep reflogs, so a copy of the repository
wouldn't have that information either.
<refname>@{<date>}, e.g. master@{yesterday}, HEAD@{5 minutes ago}
A ref followed by the suffix @ with a date specification enclosed in a brace pair
(e.g. {yesterday}, {1 month 2 weeks 3 days 1 hour 1 second ago} or {1979-02-26
18:30:00}) specifies the value of the ref at a prior point in time. This suffix may
only be used immediately following a ref name and the ref must have an existing log
($GIT_DIR/logs/<ref>). Note that this looks up the state of your local ref at a
given time; e.g., what was in your local master branch last week. If you want to
look at commits made during certain times, see --since and --until.
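Following the last sentence of that quote, a reflog-independent way to get a similar diff from the copy would be to resolve the commit by date instead (an untested sketch):
# find the last commit on master from before 5 days ago, then diff file names against the current tip
old=$(git rev-list -1 --before="5 days ago" master)
git diff --name-only "$old" master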
|
I'd like to try out a few things with git and I don't want to screw anything up in the working repository.
To try to keep things safe, I've made a copy of the bare repo that I work from and from this repo I am intending to do all my pushes and tagging. I used:
cp --preserve -r original.git copy_of_original.git
Although I understand one can undo bad commits and whatnot, I don't want to leave the repo with all these reverted commits, nor do I want to do any refactoring, hence my desire to just work from a duplicate, bare repository.
The problem is, I execute the following:
git diff --name-only master@{"5 day ago"} master
and get back:
warning: Log for 'master' only goes back to Fri, 15 Feb 2013 20:42:43 -0500.
The original repo, which I don't want to touch, does indeed have files which were modified as of 5 days ago.
If I perform git log on my copied repo, the records of these 5-day-old changes are all still there.
What is going on here?
Is there a better way to make an independent copy of the repository?
Update 1
I realized I was imprecise with my question. I had run:
git diff --name-only master@{"5 day ago"}
in the directory produced from:
git clone copy_of_original.git clone_of_copy
|
Does copying the Git bare repo change the log?
|
Check TechNet:
Back up and restore an entire farm (Office SharePoint Server 2007): http://technet.microsoft.com/en-us/library/cc262412(v=office.12).aspx
|
|
I'm currently trying to move a single-server farm to another server.
The old server is a windows 2003 32-bit with sql server 2005.
The new one is a windows 2008 32-bit with sql server 2008 r2 (32 bit).
Both MOSS 2007 have the same versions.
What I'm trying to do is use a farm backup from the old server and restore it on the new one, either with the restore tool in Central Administration or with the stsadm command, but this doesn't seem to be the best solution, as the restore fails.
What exactly should I do on the new server? Does it need to look exactly like the old one? Do I have to recreate all the web applications, for instance?
Is there any tutorial out there that could guide me step-by-step ?
Thank you for your help.
|
Restore a MOSS 2007 farm on a new server
|
The .spl is only a lock file, so no, there is no need to back it up.
I think the only option is to back up the data files, as you note.
Note that if you have RT indexes, there may be other extensions as well (I think .ram). You should also back up the binlog.
Do look at 'FLUSH RTINDEX', though - it makes for cleaner backups.
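As a rough sketch (the index name, data paths and SphinxQL port are assumptions for a typical setup):
# flush the RT index to disk first so its RAM chunk is persisted (searchd must be running)
mysql -h 127.0.0.1 -P 9306 -e "FLUSH RTINDEX my_rt_index;"
# then copy the data files and the binlog directory somewhere safe
cp -a /usr/local/sphinx/var/data /backups/sphinx-data-$(date +%F)
cp -a /usr/local/sphinx/var/binlog /backups/sphinx-binlog-$(date +%F)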
|
I have a Sphinx install on a server. I would like to copy the data created by Sphinx during indexing (basically for backup, or to provide an exact copy for the dev team).
Is there any solution other than copying the files from {your_install}/data/*.sp*?
(In this case, the .spl files are protected, but they are empty; are they useful, or are they only lock files?)
|
How to export or backup Sphinx indexes
|
Do it like this:
Take the full weekly backup with a unique name prefix like weeklybkp_.
Then run a script like the following from cron after every weekly backup.
# count how many weekly backups currently exist
DELETEMORETHAN=$(ls -1 weeklybkp* | wc -l)
if [ "$DELETEMORETHAN" -gt 4 ] ; then
    # remove the oldest ones, keeping the 4 most recent
    COUNT=$(echo "$DELETEMORETHAN - 4" | bc -l)
    rm -rvf $(ls -1t weeklybkp* | tail -${COUNT})
fi
|
|
I have a CentOS server running with backup made to an external HDD.
I run a full backup every day at 4am and incremental backups every 2 hours. I keep the last 30 days of backups, which is achieved by running a cron job every day at 6am that clears all files older than 30 days:
0 6 * * * root /bin/find /mnt/hp/backups -mtime +30 -exec rm -f {} \;
Recently my HDD has been running out of space, so I am changing my backup strategy to only keep 4 full backups for the last 4 weeks, e.g. the full backup from every Monday.
How do I write a script to keep the last 4 full backups for the past 4 weeks? I am using dump to perform the backups.
|
How do I keep the last 4 full backups
|
Problem resolved by using the -i flag.
So now I am using the command
tar -rvfEi /dev/rmt/0 <file>
:)
|
I am writing to an HP LTO4 tape drive, but after writing a big file (on the order of 30 GB) I am not able to write anything else. I get
tar: directory checksum error
Does anyone have any idea what could be wrong?
I am using the command
tar -rvfE /dev/rmt/0 <file.gz>
Need help!
|
Tar: Directory Checksum Error
|
phpMyAdmin is the wrong place to be focusing. You will need to restore the original MySQL tables and config.
This Stack Overflow post gives details about where to find these files in a WAMP installation.
|
I have a Windows machine that was running phpMyAdmin as part of WAMP.
The Windows installation broke, but I have a full disk backup.
Is there a way to restore a database to a new installation from the disk without having to do the manual export procedure (as obviously I can't do that)?
Thanks
|
Restoring PHPMyAdmin from Disk
|
Rather than trying to script it yourself and test it, I would suggest using a commercial package (safer option since they have many customers and tested it under different conditions). One option would be to use software from Zmanda. They have a good solution for MySQL and support Amazon S3 so it should be fairly easy.
|
|
I have a MySQL database running on a Windows 2008 R2 machine. I want to back up my DB every night to Amazon S3.
I'm new to this field, so can anyone guide me through the steps?
Any blog link is also appreciated. Thank you!
|
How to backup Mysql database on Windows 2008 R2 to Amazon S3?
|
You need to (a) give the restored database a new logical name, and (b) you need to define new physical file names, so the existing ones won't be overwritten. Try something like this:
RESTORE DATABASE [Elmah_Restored] <== new (and unique) logical database name
FROM DISK = N'E:\Elmah_backup_2012_11_02_030003_1700702.bak'
WITH FILE = 1,
MOVE N'Elmah' TO
N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Elmah_restored.mdf', <== new (and unique) physical file name
MOVE N'Elmah_log' TO
N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Elmah_restored_log.ldf', <== new (and unique) physical file name
NOUNLOAD, STATS = 10
GO
Marc_S, thank you for that information. What I am trying to accomplish is not restoring the backup under a different name, what I want is the backup process to fail instead of automatically overwriting the database. Essentially I want the if exists then fail semantics but I want the restore job itself to fail without me checking whether the database exists. Is there a no overwrite option?
– Chris Magnuson
Dec 6, 2012 at 19:59
@ObligatoryMoniker: the T-SQL RESTORE statement I have shown in my answer will NOT overwrite an existing database. For that to happen, you would need to add a REPLACE clause (e.g. after the NOUNLOAD)
– marc_s
Dec 6, 2012 at 21:28
|
|
I have an ELMAH database that I want to script the restore of using the following:
RESTORE DATABASE [Elmah]
FROM DISK = N'E:\Elmah_backup_2012_11_02_030003_1700702.bak'
WITH FILE = 1,
MOVE N'Elmah' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Elmah.mdf',
MOVE N'Elmah_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Elmah.ldf',
NOUNLOAD, STATS = 10
GO
Even though I am not including WITH REPLACE each time I execute this statement it restores over the existing database.
I will always drop all databases before this operation and I never want this code to accidentally restore a database over one in production.
How do I change this code so that it will never overwrite an existing database?
I am actually doing this through SMO objects but the principle and results are the same so I am hoping to keep this simplified to just the TSQL necessary in the hopes that I can generalize that information to what needs to be set on the appropriate SMO.Restore object.
|
How to stop SQL server restore from overwriting database?
|
Use mysqldump on the live server to get everything. It will create all the necessary statements: drop tables, create database, etc. Feed the resulting file into mysql on the dev server (redirected with <, after clearing the dev server out if need be) to have it read everything in.
To keep the dev server in sync with the live server, set it up as a replication slave.
In one line (for one DB):
mysqldump --opt db_name | mysql --host=remote_host -C db_name
From: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
|
I'm bringing the development server for a web application I work on back online. Currently it's extremely outdated (filesystem and database). Does anyone have a good/efficient way to do this?
Right now I am backing up the database from the live server so that I can import it into the development server, and then I am going to replace the files in the development filesystem with the live filesystem (most up to date).
This is the first time I will be doing something like this, and I DON'T want to mess up the database (well over 2GB in size). Can anyone give me some tips and recommendations?
Also, is there a way to have changes made to the live database sync back to the dev database, but without changes in the dev database being synced back to the live database?
Thank you!
|
Tips for bringing a development MySQL database back online? [closed]
|
I use the "Quick Disk Image" option (right-click on a computer) when I need to perform a backup. Set the job to not be scheduled and then configure the task as needed.
|
|
I am using Altiris Deployment Solution 6.9 SP5. I want to create a backup image of a Linux (CentOS) OS. I have created the job using the following tasks:
Reboot to automation
Create Disk Image
Reboot
The job is stuck at step 2, as the machine goes to sleep immediately after reboot. It tries to send a Wake-on-LAN signal, but that doesn't work.
Is there a way to keep the system alive until the job completes successfully?
The machine I am trying to back up is a B-series blade server. I could not find any power setting in the BIOS to enable Wake-on-LAN.
Please correct me if the process listed above for creating a backup image in DS 6.9 is incorrect.
A quick help is appreciated. Thanks!!
|
Creating Back up image of Linux using Altiris DS 6.9 SP5
|
Use GitHub or Bitbucket. You get all the benefits of version control plus cloud storage for your repositories.
You can commit changes as often as you like, and only need traffic when you push or pull changes to or from the server. The version control systems are smart enough to send only the modified files.
You could even have a team working on a local network, without the need of a cloud solution, and only push to the cloud server periodically just for backup. To do that, you can create a script that pulls from your local repository and pushes to the server (see the sketch below); that script can be run by a scheduler.
Apart from the service used to back up your files, I think you should use version control anyway. As a programmer I don't think you can live without it.
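A minimal sketch of such a scheduled script (the remote names, branch and path are assumptions):
#!/bin/sh
# pull the latest work from the team repository on the local network,
# then push it to the cloud remote (GitHub/Bitbucket) purely as a backup
cd /path/to/local/clone || exit 1
git pull team master
git push cloud-backup master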
+1 This should be the accepted answer. If you want your repositories to be private, you can host an unlimited number for free at Bitbucket as long as no more than 5 users are accessing them.
– Christian Specht
Sep 25, 2013 at 19:49
|
|
I have a "Projects" folder which contains dozens of Visual Studio projects. I want to create a backup for them. First I thought I should copy them all to my SkyDrive or DropBox folders and let them be synced to the cloud whenever there is a change.
The other strategy would be using source control, but I don't want the backup to take place whenever a change is made, and it should be optimized. By that I mean only the changed files, and only the changed parts, should be uploaded to the server to save my bandwidth. I don't have a very good connection (512 Kbps).
Also, my code is very valuable to me, so security is very important.
Is there a way to achieve the automatic backup to the cloud (ideally free) and take advantage of the source control options (such as revisions, etc.)?
I'm sure a lot of people have solutions for this and a lot of people have the same problem so please let the question be answered instead of just clicking "close"!
|
Source code backup strategy
|
I'm not sure I have 100% understood what your problem is, and I don't know any function named BackupFile().
If what you want is reusing handles from NTCreateFile() with BackupRead(), it should be perfectly fine to do so, provided the file handle was opened with the right flags & permissions.
Be sure to call NTCreateFile with the FILE_OPEN_FOR_BACKUP_INTENT flag:
NtCreateFile(&handle, ..., FILE_OPEN_BY_FILE_ID|FILE_OPEN_FOR_BACKUP_INTENT, ....)
If you plan to pass the resulting handle to BackupRead().
|
|
I'd like to open up a file by ID and then use the resulting handle in the Win32 API BackupRead()
Is this possible? I'm not certain if it's "okay" to use handles that come from NtCreateFile() in other Win32 APIs.
For example, may I do this
NtCreateFile(&handle, ..., FILE_OPEN_BY_FILE_ID, ....)
BackupFile(handle, ....)
I'm somewhat wary of using NtCreateFile; it's well documented on MSDN, but they also mention that compatibility problems could occur.
Any ideas?
|
how to open a file by fileID and then use BackupRead API?
|
The below command is syntactically incorrect.
$command = "mysqldump --all -databases > mybkp/backup.sql ";
It should be
$command = "mysqldump -u myuser -p mypass --all-databases > mybkp/backup.sql ";
EDIT:
Added the -u and -p flag. Ensure that you post your MySQL user name after -u and MySQL password after -p
Dear sir, when I used this correct syntax, it says: mysqldump: Got error: 1045: Access denied for user 'myuser'@'localhost' (using password: NO) when trying to connect. (myuser is the username of my website cPanel.)
– sqlchild
Sep 29, 2012 at 11:02
there should no be that space between -p and mypass
– air4x
Sep 29, 2012 at 11:15
Still it says: sh: -c: line 1: syntax error: unexpected end of file X-Powered-By: PHP/5.2.17 Content-type: text/html
– sqlchild
Sep 29, 2012 at 11:23
|
|
I am just using mysqldump from a PHP script, but it gives an error saying "unexpected end of file".
Please help, I am stuck.
Error:
sh: -c: line 1: syntax error: unexpected end of file
X-Powered-By: PHP/5.2.17
Content-type: text/html
Following is mybackupscript.php :
$command = "mysqldump -u myuser -pmypass mydb > mybkp/backup.sql ";
exec($command, $ret_arr, $ret_code);
If I use:
$command = "mysqldump > mybkp/backup.sql ";
it works successfully.
If I use:
$command = "mysqldump --all -databases > mybkp/backup.sql ";
error occurs saying : mysqldump: unknown option '-b'
Also, it creates the file backup.sql with the content :
Warning: The option '--all' is deprecated and will be removed in a future release. Please use --create-options instead.
|
unexpected end of file error in mysqldump via php script
|
Make a backup of the existing site if you are restoring over a current version (in case changes have been made).
Create any folders required and assign permissions.
Create the user/application pools if this is a new machine.
Otherwise, it seems reasonable to me.
|
Can anyone recommend an efficient way of restoring ASP.NET sites? Currently the steps I take are:
1) Unzip site backup
2) Edit web.config connection string to reflect my local SQLEXPRESS database
3) Restore Database in SQL studio 2008
4) Delete primary user in the database
5) Add the removed user in the Security tab and set him as the db owner
If anyone has any thoughts on reducing these steps or a different system any advice would be much appreciated.
|
Efficient way of restoring ASP.Net site
|
You have:
net use U:\\...more stuff...
but there needs to be a space between the drive specification and the network share:
net use U: \\...more stuff...
|
I'm wanting to create a batch file that will back up my computer. This is what I have right now:
@echo off
:: variables
net use U:\\...more stuff...
set drive=U:\
set backupcmd=xcopy /s /c /d /e /i /r /y
echo ### Backing up My Documents...
%backupcmd% "%USERPROFILE%\My Documents" "%drive%\My Backup\My Documents"
echo Backup Complete!
@pause
It's simple enough. It works fine for a drive that is physically attached to my computer. However, I have a networked drive that I'd like to use for this task. I've tried a few things I found on forums, but to no avail. In the above code, I started attempting to use "net use." When I run it, I get "invalid drive specification" or "network path not found."
How can I get this to work?
Any help is appreciated.
|
Using xcopy to backup to networked hard drive?
|
The MySQL binary log, by definition, logs data manipulations in binary form to save time and space. So when you examine it, you should add the --base64-output=decode-rows option to the mysqlbinlog client to have the output decoded.
|
|
I have some binary log files from localhost, but when I write
mysqlbinlog mysql-bin.000001 > statament.sql
or to a text file, it just returns gibberish, i.e.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#120919 11:08:54 server id 1 end_log_pos 106 Start: binlog v 4, server v 5.1.33- community-log created 120919 11:08:54 at startup
ROLLBACK/*!*/;
BINLOG '
potZUA8BAAAAZgAAAGoAAAAAAAQANS4xLjMzLWNvbW11bml0eS1sb2cAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAACmi1lQEzgNAAgAEgAEBAQEEgAAUwAEGggAAAAICAgC
'/*!*/;
# at 106
#120919 11:26:07 server id 1 end_log_pos 125 Stop
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
So that is mainly what is bothering me... I have no slaves set up... any idea what I'm doing wrong?
Thanks!
|
Binary Log MySQL
|
There are two main problems here.
First of all, change your set command into this: set hour=%time:~0,2% (hour without enclosing %-signs)
The next thing is that times with a single-digit hour value, like 9:00 o'clock, will be printed like:
" 9:00:00.00"
with a leading space in front of the first digit. This is the reason for your error message, as your file name might resolve to something like:
sageBackupLog-2012-09-11-hour 9.txt
This actually forms two parameters, as the name is not enclosed in quotes (").
To overcome this and print a leading zero for these times, you can use a second set statement doing a string replacement, changing spaces to zeros:
set hour=%time:~0,2%
set hour=%hour: =0%
You can then safely use %hour% instead of %time:~0,2% in your script.
Hope that helps.
|
I have a script that does a simple XCOPY routine for backing up all our corporate files:
@echo off
IF %time:~0,2% GTR 7 (
IF %time:~0,2% LSS 21 (
XCOPY "R:\Sage Src" "S:\lastdata" /D /Y /E /R /K /C /H /I >> S:\sageBackupLog-%date:~-4,4%-%date:~-7,2%-%date:~-10,2%-hour%time:~0,2%.txt
XCOPY "R:\importantStuff" "V:\lastdata" /D /Y /E /R /K /C /H /I
EXIT /B 0
)
)
But ever since wrapping it all in the two IF statements, it no longer outputs the hour.
I have tried set %hour%=%time:~0,2% but it doesn't work, returning an invalid parameters error.
|
Batch IF causing hour not to show
|
Yeah, that's fine, just keep it to about once a day. We run WAL-E for continuous protection on everything. Also, please know that while you can figure out the API for pgbackups easily enough, it is not public and is subject to change at any time. I've broken it for people who scripted against it in the past, not on purpose or vindictively, but to fix pressing problems. So just be aware that what you're doing is not really supported, but we're not going to get mad.
You might also want to look into just using straight pg_dump.
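For reference, a straight pg_dump invocation might look roughly like this (a sketch; it assumes a pg_dump version new enough to accept the connection URI that Heroku exposes as DATABASE_URL):
# dump the Heroku Postgres database to a local, custom-format archive
pg_dump --format=custom --no-owner --file=backup.dump "$DATABASE_URL"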
|
Following on from this earlier question, I've had a look at the Heroku client gem and written a Python script that performs Postgres backups on Heroku in the same way as the pgbackups addon.
Since the removal of the auto-month option on the free database tier I wanted a way to perform backups automatically via Heroku Scheduler.
However before I use this script I want to make absolutely sure it doesn't violate any Heroku terms, as that is the last thing I want to do.
The exact functionality of this script is as follows:
In the "show" mode, it sends a GET request to the PGBACKUPS_URL as defined in the environment, querying the /client/latest_backup endpoint for the details of the latest backup taken.
In the "capture" mode, it sends a POST request to the PGBACKUPS_URL, endpoint /client/transfers, supplying the DATABASE_URL from which the backup should be taken.
This is exactly how the native Heroku client performs pgbackups. The script is written in Python because I needed a Python resource for my Cedar stack projects, and would be run on the server via heroku run.
Please could someone "in the know" tell me if this is considered OK or not?
Many thanks.
|
Can someone from Heroku confirm my backup script doesn't violate their terms?
|
The best way is to use rsync, as you would be uploading (most likely) changes only.
http://linux.die.net/man/1/rsync
Additionally you can create incremental backup:
http://www.mikerubel.org/computers/rsync_snapshots/
So my suggested solution would be rsync + crontab
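For example (a sketch; the remote host, user and destination path are assumptions), a daily crontab entry for the uploads directory from the question could look like:
# every night at 02:30, push only new/changed files to the backup host over SSH
30 2 * * * rsync -az /myapp/uploaded_documents/ backupuser@backup.example.com:/backups/uploaded_documents/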
|
So I have a PHP application running on a Linux machine which uses a MySQL database. I have managed to back up my MySQL database every day by adding a job to the crontab. In my application, clients are able to upload documents, which are saved in a directory in the application folder, i.e. /myapp/uploaded_documents/, and I am looking at backing up this directory.
My question is: how do I back up a directory to a certain remote location at a certain time every day? Is it possible to also password-protect this directory in my app folder?
Thank you
|
Backing up uploaded document on Linux
|
You need to take a backup of the data folder; that's it.
Adding the configuration files in the backup will be a good idea too. Because if you lose them, you will have to recreate them.
– mavroprovato
Aug 30, 2012 at 10:45
@mavroprovato: yes. I agree. that's a good point. We should take backup of conf files too. Thanks.
– Stackist
Aug 30, 2012 at 10:53
You need to take the backup while Solr is not running though.
– javanna
Aug 30, 2012 at 11:54
@javanna: Yes. That's recommended. Solr should be stopped or it should be pointed to another index. Thanks.
– Stackist
Aug 30, 2012 at 15:44
@All, Thanks for your response. I have implemented with your suggestions. I have another question. Is there any way to restore the back up data, without stopping the Solr server?
– ak87
Aug 31, 2012 at 6:12
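Putting the answer and the comments together, a minimal backup sketch might look like this (the paths and the service name are assumptions for a typical install):
# stop Solr (or the servlet container running it) so the index is not written to during the copy
sudo service tomcat6 stop
# archive the index data plus the configuration files
tar czf /backups/solr-$(date +%F).tar.gz /opt/solr/data /opt/solr/conf
sudo service tomcat6 start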
|
|
I have a question about Solr database backups. I want to know the number of records that have been transferred from source to destination. Is that possible?
|
Taking backup in Solr
|
I recommend iCloud. The iCloud API is intended to work with Core Data SQLite databases. For more information, read the Using Core Data with iCloud Release Notes.
|
|
I have a standard iOS Core Data application (SQLite). Users are requesting that they be able to back up their data and restore it from a backup. Can anybody tell me how to do so, or point me to some links where this is explained in detail? I don't mind using iTunes file sharing, but would like to know how to implement this.
Also, if the user restores a database and the database is not valid, it should be rejected before replacing the existing database. I have searched the internet for this but did not find any examples.
|
Backup / Restore Coredata (sqlite)
|
This is probably not going to help you right now, but the best way would be to redeploy the script from its original source or to restore it from version control or from a backup.
|
|
How can we recover a deleted Python script file (deleted using rm), say lostfile.py, on a Debian Linux box?
The file system of the box is JFS.
|
recovering the deleted python script file
|
You can use ImpEx. https://www.vbulletin.com/docs/html/impex/
|
|
How do I merge an old database with a new database?
I need to add everything (topics, comments, users and all other info) from one DB to the other...
vBulletin 3.7.3 (both DBs)
|
vBulletin merging old database and new database
|
You can copy files with Midnight Commander and then press the skip button if you see that a file copy has stalled.
If you have just one file, it works perfectly. However, if there are too many files, it is hard to do manually. I wrote a bash script to deal with such situations.
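A simplified sketch of that idea (not the exact script; the timeout value and paths are placeholders) is to copy file by file and give up on anything that stalls:
#!/bin/bash
# copy file by file; skip any file that stalls for more than 10 seconds
SRC=/mnt/source
DST=/mnt/backup
find "$SRC" -type f | while read -r f; do
    dest="$DST${f#"$SRC"}"
    mkdir -p "$(dirname "$dest")"
    timeout 10 cp -p "$f" "$dest" || echo "skipped: $f" >&2
done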
|
|
I have to back up an existing /dev/sdX, but cp hangs on one file (/proc/kmsg). It can't skip it... it just hangs as if it's copying a 34 KB file.
So how can I skip this file?
|
Copy files with cp (unix) but skip files which can't copy
|
This is a pure MySQL solution, which you could modify your code to use:
1.) CREATE TABLE new_table_name LIKE old_table_name
2.) INSERT INTO new_table_name SELECT * FROM old_table_name
Done. ;-)
This way you have an exact backup of your table as it was previously, and a join makes it easy to see the differences:
SELECT a.*, b.* FROM old_table a JOIN new_table b ON a.id=b.id WHERE <criteria>;
EDIT
UPDATE BACKUP SET COLUMN = (SELECT COLUMN FROM TABLE WHERE user_id=#) WHERE user_id=#;
|
I am working on a system which keeps track of what was in the field, prior to it being updated. I'd prefer using a table for the previous data, but am open to other options. This is some sample code which would accomplish the task :
<?php
$initial_value = $_POST['some_value'];
$id =231212213; // some id
$stmt = $mysqli->prepare("SELECT column FROM table WHERE user=?")
$stmt->bindParam("s", $id);
$stmt->execute();
$stmt->bind_result($column);
$stmt->fetch();
if ($column !="") {
//edit : it doesnt matter to me whether the data is moved into a new table or column
$stmtA = $mysqli->prepare("UPDATE another_table SET backup_column=? WHERE user=?");
$stmtA->bindParam("ss", $column, $id);
$stmtA->execute();
$stmtB = $mysqli->prepare("UPDATE table SET column=? WHERE user=?");
$stmtB->bindParam("ss", $initial_value, $id);
$stmtB->execute();
}
?>
|
PHP MYSQLi - Backup column data (into another table/column) before updating column
|
Here is a list of services which should help you:
http://jarvys.io - a command-line tool/service that provides server backup to the cloud, with a simple restore process.
https://bitcalm.com - SaaS for backing up Linux server files and DBs. It has a web UI to configure and manage the backups for all your servers. It backs up your data to Amazon S3 (you can use your own storage), and recovery is really simple.
I don't have enough reputation to post more links, but you can google services like gobitcan, backuprun and tarsnap - all of these are services that can solve your problem too.
|
|
I have one local Ubuntu server on which we keep all our development websites. They are all PHP-based sites. I would like to know whether we can have a script or something similar in cron to back up the files and databases daily to an external hard disk.
Please let me know.
|
Backup websites with database in ubuntu server
|
Since the Express versions don't have the SQL Server Agent that executes jobs - no, I don't think this is possible.
What you could do, however, is create a standalone console app that uses the SMO library to perform the SQL Server backup, and then just schedule that console app on your machine using the built-in Windows scheduler to run once every day (or every four hours, or whatever you need).
As for resources on SMO - check these out:
Getting started with SMO in SQL Server 2005
Using SMO for Backup, Restore and Security Purposes
|
In our software, because the users are on SQL Server 2008 R2 Express, I want to create backup jobs for them programmatically using the SMO API. Is this possible? If so, please point me to any articles written on this topic.
|
Is it possible to create backup jobs on SQL Server 2008 R2 Express with SMO
|
Have mysqldump create a local file, then use rsync with the --bwlimit option, or robocopy with the /IPG:n option to copy over the file to the network share:
@echo off
echo Running dump...
net use z: \\BackupComputerName\SharedFolder >c:\debuglog.txt 2>&1
cd C:\Program Files\MySQL\MySQL Server 5.5\bin
mysqldump.exe -uroot -pmypassword --result-file="%temp%\backup.sql" database_name
robocopy "%temp%" z:\ backup.sql /IPG:100
echo Done!
Thanks very much for the quick answer. I haven't had a chance to test it out yet, but if it's copying the data over in small chunks, I'm hopeful that'll do the trick. I'll let you know what happens soon.
– musical_coder
Jul 23, 2012 at 5:25
So it turns out the problem was actually a thread in my ASPX application that was chewing up a huge amount of CPU. Once I cleaned up that and a few other things, the dump worked fine with my original script. Still, it's likely I'll be using your solution in the future once my database gets larger, so thanks again for posting it.
– musical_coder
Jul 25, 2012 at 4:16
Since I answered your question, would you mind upvoting it? Thanks!
– Ross Smith II
Jul 28, 2012 at 1:30
|
|
I have a system I'm developing as follows:
-At any one time, three or four clients which are pumping data to the central server over a TCP socket.
-A Windows 7 Pro, 2.4 GHz dual-core Xeon server with 4 GB (soon to be 8 GB) of RAM, which houses both a C# ASPX web application that receives the client data and the MySQL database that the data is then put into.
-A mini-PC that stores the database backups, which are done every night. The batch file that I run, which works fine when there's no incoming TCP traffic, is:
@echo off
echo Running dump...
net use z: \\BackupComputerName\SharedFolder >c:\debuglog.txt 2>&1
cd C:\Program Files\MySQL\MySQL Server 5.5\bin
mysqldump.exe -uroot -pmypassword --result-file="z:\backup.sql" database_name
echo Done!
Right now, the database dumps are only about 5 megabytes. Even so, I notice that when the Xeon server carries them out while there's incoming TCP traffic, the client TCP sockets are significantly slowed down, if not disconnected entirely. And the database dump doesn't make it successfully to the backup machine. In fact, it started running in the Windows Task Manager at 8:20pm last night, and as of right now (12:21pm), it's still listed as running! (With no backup file generated yet.) I don't think that it's a coincidence that one of my clients died about a minute after the backup task was triggered last night.
Any ideas for how to reconfigure the backup routine and/or system configuration to make this work would be greatly appreciated.
|
Small MySQL database backup interrupts TCP traffic
|
Well, IF Apache (or whichever web server) on that IP is already configured for your FQDN (which you couldn't point at the new IP yet), then you could upload a PHP file with a phpinfo() call. Afterwards you could telnet to the known IP on port 80 and send:
GET /[PHP-File] HTTP/1.1
Host: [Your FQDN]
So e.g., if your FQDN was www.example.com:
GET /info.php HTTP/1.1
Host: www.example.com
The web server should then send the output of your PHP script.
Note that your lines have to end with a <CR><LF> and that there is a blank line after your Host:-line.
|
I'm in the process of re-installing a backup of a PHP script on a new server.
The previous web developer has gone missing in action, and he has yet to release the domain that's under his management.
So for the new server they only provided me with MySQL details and FTP access. They say they don't have a temporary testing URL, so I can only upload the backup without testing it.
I changed the MySQL server variables, but the script also has some server-side specifics. For example, the full path has to be defined in the config file. I only have FTP access, so I can't run a simple phpinfo().
Is there any other way to get this specific server information with only FTP access? I basically need the full server path where the script will be installed.
Thanks!
|
Getting server information with only FTP access
|
It suffices to read the documentation. It tells you exactly how to use wget or curl with a cron job. Moreover, there is a section called "A PHP alternative to wget". I write the documentation of Akeeba Backup and make it available free of charge for a good reason: to be read and to prevent such questions ;)
Neat! The first thing I was looking was the quick start guide, that's how I got the URL to call. After that I looked in the documentation also, but for some reason I did not read it to the end. Thanks for that.
– chrinetr
Jul 1, 2012 at 19:07
Hey, unfortunately it does not work properly. When I execute the script "Sorry, the backup didn't work" is returned. I just copied the script from the documentation and changed the three parameters at the beginning of the script. The password is correct I checked it and also just called the URL via the browser and it seems to work there properly. Any ideas where I can look for an error description? I checked the log, but I found nothing there, but I can't say that I understood everything written in there.
– chrinetr
Jul 2, 2012 at 17:35
|
|
We are using Akeeba Backup to back up our Joomla website. It is possible to start a backup just by calling a URL, as described here: https://www.akeebabackup.com/documentation/quick-start-guide/automating-the-backup.html. To automate the backup of our site, we want to call this URL from a daily cron job. Our web host supports the creation of cron jobs, but you cannot use any shell scripts or the like; only the execution of a PHP script is supported. So we have to call this URL from a PHP script. I created the script and it works fine when called directly from my browser, but when I try to execute it from the cron job I only receive error 302, which means that the document has temporarily moved. I don't know what to do with that. This is the script I want to execute:
<?php
$result = file_get_contents("http://www.mysite.net/index.php?option=com_akeeba&view=backup&key=topsecret&format=r");
?>
I am not experienced with Cron jobs or PHP so any help would be nice.
Thanks for your time.
|
Start Akeeba Backup Using a Cron Job
|
The command you show is a Linux / unix shell command. If you add another step
grep -v turnkey
you will omit any lines with the word "turnkey" in them.
Like so:
find /var/lib/mysql -mindepth 1 -maxdepth 1 -type d |
cut -d'/' -f5 |
grep -v ^mysql\$ |
grep -v turnkey |
tr \\\r\\\n ,\ 
You didn't ask if this is a good idea. I don't think it is, because it relies on a particular on-disk structure of the MySQL server daemon software that is not part of the formal specification of the system. In other words, it could change.
You could do this:
SELECT SCHEMA_NAME FROM `information_schema`.`SCHEMATA`
WHERE SCHEMA_NAME NOT LIKE '%turnkey%'
ORDER BY SCHEMA_NAME
|
I am following the guide at this site and trying to exclude databases with the name turnkey in them.
find /var/lib/mysql -mindepth 1 -maxdepth 1 -type d | cut -d'/' -f5 | grep -v ^mysql\$ | tr \\\r\\\n ,\ `
this command returns all the database names, how can I remove the turnkey ones?
|
ignore mysql tables in backup script
|
If you have MySQL installed locally, you can take a backup in just the same way you would back up a database that resides on your local database server. For example, the following command would get you a backup of the database that you created under the RSS manager of the StratosLive Data Services Server (the host name is the one extracted from the JDBC URL given to you):
mysqldump -u your_username -pyour_password -h rss1.stratoslive.wso2.com database_name > local_file_system_path/backup.sql
Cheers,
Prabath
|
I have created a database on the StratosLive server and my database URL is this:
jdbc:mysql://rss1.stratoslive.wso2.com/karshamarkuptool_karsha_opensource_lk
I tried Database Console > Tools > Back Up, and it is asking me for these credentials:
Target file name:~/backup.zip Source directory:
jdbc:mysql://rss1.stratoslive.wso2.com/karshamarkuptool_karsha_opensource_lk
Source database name: karshamarkuptool_karsha_opensource_lk
Are my credentials right? It says there is no database found in the source directory.
If not, what is the way to get a backup of a StratosLive database? How can I configure it to take an automatic weekly backup?
|
How to take database backup form WSO2 Stratoes Live Data Services Server
|
The way I solved this problem was:
Take a monthly (or weekly/daily) backup, then reset the master so it restarts the log files. Pipe the backup back into the master so it refills the database one table at a time. Restart the slaves.
I had many more slaves and the table reloads didn't take too long. If your backup takes too long, you may want to do this a different way.
If you lose a slave, you can just have it restart through the log files, as long as you start it with a table reload. If you back up from one of the slaves, it's critical to ensure it's in sync with the master first.
There may be other ways to do this, but having a log file fresh start periodically became very useful.
Master Code to go in cron (This is from a way back, you should verify it works for you):
#!/usr/bin/ksh
date=`date +%y%m%d`
mysql -u root db_name -e "flush tables with read lock;"
mysqldump -u root -pYrPass --add-drop-table --add-locks db_name > /path/to/backup/db_name$date
mysql -u root -e "reset master;"
mysql -u root db_name -e "unlock tables;"
mysql -u root -pYrPass db_name < /path/to/backup/db_name$date
mysql -u root -e "flush logs;"
On the slaves:
Use the SHOW SLAVE STATUS command to verify you are in sync with the master. If you want to resync to the master, run:
slave stop;
reset slave;
slave start;
You may need to stop mysql, delete the slave bin log files then restart and run the above.
I'm not following your procedure. Including commands I have to run on the master or slave would help.
– robsf
Jun 19, 2012 at 17:28
Can you mark this as answered, or is there something you still need?
– DTecMeister
Jul 9, 2012 at 13:28
|
|
I'm running MySQL 5.1. I have a master and a slave on 2 machines, and I set up replication.
I do periodic backup on my slave server. I stop mysql, I copy all the files and I restart mysql.
In case I lose the Master, I can set up a new one from the last backup.
What if I lose the slave? Can I restart the slave from the last backup? Am I supposed to keep track of the replication position every time I do a backup?
|
How to restore a slave from a mysql backup?
|
They're intentionally preventing users from restoring backups that are "foreign" to them in order to satisfy an obscure Microsoft security recommendation.
You will have to perform a schema comparison and a data comparison between your local machine and the empty database on the hosting to generate the scripts to re-create all of the objects and data. (Having those scripts available in source control would also be helpful.)
|
|
I am registered at GoDaddy and want to restore the database there from my local machine. The tool they provide doesn't work unless the backup is from them. I'm trying to restore from my local SQL Server, but when I browse I can't restore the local files to the remote database.
|
Restore to remote database from my local machine
|
Well, for one thing, it requires API Level 8 (Android 2.2) or greater, so currently about 6% of devices with Google Play can't use it. Otherwise, I think it's a safe assumption that the vast majority of devices with 2.2+ and Google Play have access to it.
Yes, I understand that it's >= level 8, but that's not what I asked. Making assumptions always ends in tears. I want evidence so that I may make an informed decision.
– James
Jun 7, 2012 at 9:36
@James It may not be what you meant to ask, but the information I provided answers the question you quite literally did ask (re-read your question if you don't believe me!) The manufacturer and carrier have the ability to customize or remove the transport layer, so it's not guaranteed to be there. However, in my experience it usually is.
– Darshan Rivka Whittle
Jun 7, 2012 at 10:33
|
|
The document here states that:
'Data backup is not guaranteed to be available on all Android-powered
devices'
Are there any examples of when the backup service is not available on a device? Is the backup service guaranteed to be there if the user has installed the app via Google Play (i.e. they have a google account)?
Thanks
|
When is Android's backup service not available?
|
Version control at the table level is usually done by adding a version column to your existing table. So if you had a posts table you would add a version column with a simple int or date value. Every time a post is edited a new record is inserted and the version number bumped. You can reference this version number in another table or have an additional active field to denote which post is the latest/active.
If you're talking about rolling back the database I would use the mysqldump command to dump and mysql to restore.
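For example (credentials and names are placeholders):
# take a point-in-time dump of the CMS database
mysqldump -u cms_user -p cms_db > cms_backup.sql
# roll back later by loading that dump again
mysql -u cms_user -p cms_db < cms_backup.sql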
|
I've been working on a CMS and wanted to add a backup/restore function. I want it to back up a saved entry every other time it is edited, so that if a client makes a change they are unhappy with, or screws something up, they can go back and choose a previous state to restore from.
The problem I'm having is trying to wrap my head around the concept of coding this functionality. I have created a new table called "backup" with the fields "ID, date, time, pageName, and pageContent" but going on from there I find myself stuck.
Would I add a new query to my edit script that saves a duplicate to the backup table? And how could I get it to only back up every few saved edits?
Here I have my save edit query, if it helps any. Thanks in advance! :)
<?php
$_POST['entry'] = mysql_real_escape_string($_POST['entry']);
$sql="update pageEntry set entry=\"$_POST[entry]\", pageName=\"$_POST[pageName]\" where id=\"$_POST[id]\"";
$result=mysql_query($sql)or die('Bad Query');
echo "<div id='edit2'>The file has been uploaded, and your information has been added to the directory<br /></div>";
?>
|
Creating a PHP CMS with an entry backup/restore option
|
You need to have a LAMP stack installed on your local machine. In addition, you'll need to modify the settings.php file to change the database connection strings to match your local environment. You may also need to modify the $base_url variable in settings.php.
This would not be necessary if you were simply restoring, but since you're moving the install it is required.
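A rough outline of the local restore, assuming the backup is an SQL dump plus a tar of the www directory as described in the question (all names and paths are placeholders):
# unpack the site files into the local web root
tar xzf www_backup.tar.gz -C /var/www/
# create an empty database and load the dump into it
mysql -u root -p -e "CREATE DATABASE drupal_dev"
mysql -u root -p drupal_dev < drupal_dump.sql
# then edit sites/default/settings.php: point the connection string at drupal_dev
# and adjust the $base_url variable for the local environment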
|
So I wrote this script that basically creates an SQL dump of the Drupal databases as well as a tar of the www directory. I took these off the server and put them on my local machine. I want to take these backup files and test whether the backup is stable, as well as learn the process.
My problem is that I can't find any clear instructions on how I would be able to do this. Can anyone give me a hand?
Any help is much appreciated.
|
Drupal Backup and Restoring
|
Reverting is fairly simple: create a new database under another name. Say your current DB is magento1; create magento2 with your backed-up data, and make sure your current Magento DB user has rights to the new database.
Now edit app/etc/local.xml, changing <dbname><![CDATA[magento1]]></dbname> to <dbname><![CDATA[magento2]]></dbname>, clear all caches, and your site should now be pointing at the old data.
This is a quick-fix method, but it seems like that is what you are looking for right now.
|
My Magento site, which was working fine an hour ago, is now giving an error after I imported
customer data from a CSV file in "Replace Existing Complex Data" mode, as we wanted to update some of our customer data.
When I try to add a product to the cart from the frontend with an existing customer, it gives me this error:
Item (Model_Customer_Customer) with the same id "11" already exist
exception 'Exception' with message 'Item with the same id "11" already exist' in /htdocs/localhost/magento/lib/Varien/Data/Collection.php:373
Can somebody guide me on what I should do now?
Or tell me how I can revert back to the original database, as I take a backup of the database daily.
So I have yesterday's backup of my site's database.
How can I revert back to yesterday's database?
Please help me.
|
Magento Import Customer Error...Now Applicatiion itself not running?
|
From the database's viewpoint we are dealing with an unclean shutdown (i.e. the power went off) and a lost connection, so it will discard all transactions that are not committed.
If you are taking a snapshot of the server, that's just like freezing everything in cryogenic sleep; after a restore the database would simply wake up expecting to talk to a non-existing application.
The only issue that I can see is not from a transaction itself but from the fact that the database itself resides in files. What if you freeze a file half-written to disk? I can see how that might be a problem. On the other hand, there is probably some architectural design in place to prevent this, as the same is true for a power outage and a database should live through that too.
|
Let's say you're taking daily snapshots of your server as a whole (public_html, SQL files, etc.) for later restoration in case of system failure.
Is it possible that you could end up with a damaged MySQL InnoDB database if you took the snapshot while an uncommitted transaction was taking place? Or will InnoDB do just fine and discard "incomplete" transactions on restoration?
|
Can server snapshots potentially damage MySQL transactions?
|
Pseudo-code to copy the database to the SD card. To copy it back, simply reverse the streams.
public boolean copyDatabase() {
String SDCardPath = Environment.getExternalStorageDirectory().getAbsolutePath();
// Create the directory if neccesary.
File directory = new File(SDCardPath + <PATH TO SD-CARD SAVE LOCATION>);
if (!directory.exists())
directory.mkdir();
// Close the database before trying to copy it
database.close();
// Copy database to SD-card
try {
InputStream mInput = new FileInputStream(<PATH TO DATABASE ON PHONE>);
OutputStream mOutput = new FileOutputStream(SDCardPath + <PATH TO SD-CARD SAVE LOCATION>);
byte[] buffer = new byte[1024];
int length;
while ((length = mInput.read(buffer)) > 0) {
mOutput.write(buffer, 0, length);
}
mOutput.flush();
mOutput.close();
mInput.close();
} catch (Exception e) {
// NOTE: copy failures are silently swallowed here; log or report the exception as appropriate
}
return database.open();
}
|
I have an application in which I am updating my SQLite db file, but every time the user uninstalls the application this db file is deleted, and on reinstallation a new db file is created. I want to copy this db file to a micro SD card after every update so that I can still access my database after uninstallation.
Goal
Copy the db file (say text.db) when it is created
Copy it again every time the application's db file is updated
Each time, copy and replace the db file on the micro SD card
|
Copy SQLite db to Storage device for backup purpose
|
I believe you have to do it in two steps. Something like:
rsync -dv * remote:dir/
rsync -rv some_folder3 remote:dir/
Read The Fabulous Man page.
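Applied to the layout in the question, the two passes could look roughly like this (remote paths are placeholders): the first pass copies only the directory skeleton, the second copies the contents of the one folder you care about.
rsync -av --include='*/' --exclude='*' Client_Site/ remote:backup/Client_Site/
rsync -av Client_Site/some_folder3/ remote:backup/Client_Site/some_folder3/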
|
I have a directory structure as follows
Client_Site/
some_folder1/
some_folder2/
some_folder3/
Lots of files
some_folder4/
Client_Site2/
and so on. I want to target the some_folder3 within the directory structure of each client site and only backup that folder with rsync, but maintain the rest of the directory structure as empty folders.
Is this possible? What would my include file look like?
|
rsync include files from specific folder?
|
Can you set up a third server which is monitoring both application servers for health? This server could then decide appropriately in case one of the servers appears to be gone: Instruct the hot standby to start processing.
|
I've got a writing and reading database application holding a local cache. In case of an application server fault a backup server shall start working.
The primary and backup application can only run exclusively because of its local cache and some low isolation level on the database.
As far as my communication knowledge goes it is impossible to let both servers always figure out who is allowed to run exclusively.
Can I somehow solve this communication conflict through using the database as a third entity? I think this is a quite typical problem and there might not be a 100% safe method, but I would be happy to know how other people recommend to solve such issues? Or if there is some best practice to this.
It's okay if both application are not working for 30 minutes or so, but there is not enough time to get people out of bed and let them figure out what the problem is.
|
Failover strategy for database application
|
There is no way to get the exact backup size until you try backing up the database.
An approximate size can be guessed at by looking at the database file sizes. This is useful if you have no idea how big it will be and you need to "publish" some estimate to the unfortunate user whose disk is full. But the actual backup will likely be much smaller than the file sizes suggest.
SELECT CAST(SUM(size) AS DECIMAL) * 8192 from sys.database_files
Given that you will have to add that try-catch clause anyway, much easier and more accurate estimates can be obtained from looking at the last backup size like this:
SELECT TOP 1 database_name, backup_size FROM msdb..backupset ORDER BY backup_finish_date DESC
You can compare either of these values to the amount of free disk space. It may be useful to add some margin to the number obtained by the latter method to account for some expected gradual database growth.
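From .NET the comparison could be sketched like this (connection string and drive letter are assumptions, and the query assumes at least one previous backup exists):
using System;
using System.Data.SqlClient;
using System.IO;

class BackupSpaceCheck
{
    static void Main()
    {
        decimal lastBackupSize;
        using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT TOP 1 backup_size FROM msdb..backupset ORDER BY backup_finish_date DESC", conn);
            lastBackupSize = Convert.ToDecimal(cmd.ExecuteScalar());
        }

        // assumption: the backup will be written to drive D:
        long freeSpace = new DriveInfo("D").AvailableFreeSpace;

        if ((decimal)freeSpace < lastBackupSize * 1.1m)   // keep ~10% headroom for growth
            Console.WriteLine("Not enough free disk space for the backup.");
    }
}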
|
We want to take db backups programmatically. For this we used the SqlServer.Management.Smo namespace and its classes, and it is working perfectly.
Now we have a requirement to determine whether there is enough space is there in the location to hold the backup files before saving the db backup to the specified location.
And if there is not enough space, we want to alert the user of that fact.
One way we found is that put a try catch and catch the exception if there is no enough space in memory. But we are looking for a solution to get the size before saving it.
|
Get SQL Server database backup size in .Net
|
Make a regular dump with mysqldump or use another database-specific backup tool. A copy of the data folder is not OK.
mysqldump actually reads the data, so the result can be checked. It is not always true that all data has been written completely to the data files, and locking can cause issues.
If you have a specific backup time, just run a cron job before that moment and verify that it finished safely. MySQL will take care of locking, changes, transactions, etc.
Always, really always, verify your backup with a restore test.
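For example, a nightly crontab entry that dumps everything shortly before the Time Machine run (the MAMP path and credentials are assumptions):
30 1 * * * /Applications/MAMP/Library/bin/mysqldump -u root -proot --all-databases > /Users/you/backups/mysql-$(date +\%F).sql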
|
Not sure if this is a Stack Overflow question.
I have a Mac and am hosting an Apache/MySQL server on it using MAMP Pro. If I back up my data with Time Machine, is the MySQL database also backed up, or do I have to create a mysqldump and back that up as a cron job? In case of a crash, do I just do a normal restore, assuming it can be backed up by Time Machine?
Thanks
|
Backup MySQL on Apple time machine
|
You cannot restore a later-version backup to a previous version, and SQL Server 2008 R2 is the later version here. Your alternative is to script out all data and database objects and run those scripts on SQL Server 2008.
How to Script out your database
|
I have taken a backup from a database (SQL Server 2008 R2), but on my machine I have
SQL Server 2008. When I try to restore that backup on my machine, it shows an error. This is the error message:
The database was backed up on a server running version 10.50.1600.
That version is incompatible with this server, which is running version
10.00.1600. So how can I now restore my file?
|
Backup is not getting restored
|
I am not aware of such a "mixed" storage mode for Hadoop, so I do not think your scenario is directly supported.
To me it looks like you need a more "elastic" solution. If EMR were available as open source it might be a good choice, with the NAS playing the role of S3.
I would suggest the following in your case:
Install and run data nodes on all available servers. They are not as resource hungry as task trackers, since they only read/write data sequentially.
Install task trackers on all machines as well, but run them only on the machines that are not otherwise in use. Hadoop is smart enough to preserve data locality when possible. At the same time, Hadoop copes with changes in the number of task trackers much more easily than with disappearing data nodes.
Alternatively, you can build a cluster of task trackers only, not use HDFS, and run jobs against the NAS.
In all cases, the main interference with other users I would still expect is network congestion - during the shuffle stage Hadoop usually saturates the network.
|
I have Hadoop running on a cluster that has non-dedicated nodes (i.e. it shares nodes with other applications/users). When the other users are using a cluster's node, it is not allowed to run Hadoop jobs in that node. Thus, it is possible that only a few nodes are available in a given moment, and that this few nodes do not have all data blocks (replicas) need by the Hadoop job.
I also have a big Network-Attached Storage that is used for backup. So, I am wondering if there is a way to use it as a secondary storage for Hadoop. For example, if some data block is missing in the cluster, Hadoop would get the block from the secondary/backup storage.
Any ideas?
Thanks in advance!
|
Is there a way to have a secondary storage or backup for data blocks in Hadoop?
|
Of course duplicity can verify existing backups; it just does not do this automatically, as it assumes that you are sensitive about traffic costs (S3 etc.).
Simply run a verify (as described in the duplicity man page) before the backup and conditionally start a full backup if that fails. Be aware that the complete backup chain has to be downloaded for this, so it will nearly double your traffic.
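For example (target URL and paths are placeholders):
duplicity verify sftp://user@backuphost//backups/mydir /home/me/mydir || duplicity full /home/me/mydir sftp://user@backuphost//backups/mydir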
ede/duply.net
|
I have heard that Duplicity is a nice tool for doing incremental backups.
The only thing I am concerned with is verification.
How does Duplicity check the backup for being consistent before it syncs it to a server? Does it actually do this?
It would not be nice to find oneself facing a corrupted backup file issue when trying to restore a backup.
As I understand, the basic workflow of Duplicity is the following:
Generate a delta from a directory which has to be backed up;
Sync this delta to a remote storage.
Is there any verification of this delta between 1 and 2?
P.S. I have found this, but it is used to see "what files, if any, have changed since the last backup" and not to verify the integrity and consistency of a backup file.
|
How to verify a file before backup?
|
You have the flexibility to do a few different things here, but I think starting your repo at the root / and tracking your files recursively is the way to go. Git will pick up the files in your subfolders.
The key advantage to this is that you basically have snapshots of your entire app, and can easily checkout the repository when creating a development or staging environment.
Git is great for managing the code changes, but I'd still suggest using something like rsync to backup the entire site every once in a while. That way, you can only track files that are going to change consistently (code, user generated content, etc) instead of backing up images/audio/other that are NOT going to be changed. Better to be safe than sorry :)
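A rough sketch of that setup (paths are made up):
cd /var/www/mysite
git init
echo "files/media/" >> .gitignore     # keep large uploads out of the repo
git add .
git commit -m "initial snapshot of the site"
# occasional full copy, media included, to another machine
rsync -a /var/www/mysite/ user@backuphost:/backups/mysite/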
|
I have a website with multiple folders that have different purposes on the site. Think
/
/forum
/drupal_site
/contact
/about_us
Now how should I handle backups for these? Should each folder have its own repository? Should there be one for the entire site? Is Git the way to go in the first place?
Does it not make a difference and is up to whatever is more convenient to me?
Is there any "standard" way to do this or each person does it differently?
Also should each of those folders have their own js css and img folders or should that be shared across the site (only in the root)?
|
How can I backup a website with multiple folders via Git?
|
In your service you can loop through the list of files, find the ones that are x number of days old and delete them.
See the File, FileInfo, Directory and DirectoryInfo classes on MSDN.
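A minimal sketch of that loop (folder path, file pattern and age threshold are assumptions):
using System;
using System.IO;

class CleanupOldFiles
{
    static void Main()
    {
        var cutoff = DateTime.Now.AddDays(-7);                 // assumption: keep one week of files
        foreach (var file in new DirectoryInfo(@"C:\ServiceOutput").GetFiles("*.txt"))
        {
            if (file.LastWriteTime < cutoff)
                file.Delete();                                 // older than the cutoff -> remove
        }
    }
}
You could run this on a timer inside the same service, or as a scheduled task.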
|
I have a service that produces .txt files to a folder every 30 seconds. Is there any way to delete files that are older than x number of days?
|
How to delete a specific type of files after x amount of days?
|
In my opinion, using a version control system is not overkill at all. You keep a handle on your changes all the time, and you can build features on branches before you even know whether they will be merged into the main branch. Not only features but any complicated task, for example refactoring. With a VCS it is no problem to make a quick fix to the released product while a large, incomplete task is still in development.
I can't imagine working without any VCS. I prefer Git because it is fast and easy.
|
My app is getting to the point where I will be highly perturbed if I lose the source somehow. This is a personal / single developer project, so something like Subversion might be overkill. I'm thinking more along the lines of the "Backup" agent that is a part of the GExperts add-on in the Delphi world. Is there such a thing (that would backup all the .java, .xml, sqlite, etc. files) specifically for or suitable for the Android platform?
|
How to backup relevant files?
|
A quick peek at the command line shows these are deprecated commands.
mysql> help backup; Name: 'BACKUP TABLE' Description: Syntax: BACKUP TABLE tbl_name [, tbl_name] ... TO '/path/to/backup/directory'
*Note*: This statement is deprecated and is removed in MySQL 5.5. As an alternative, mysqldump or mysqlhotcopy can be used instead.
I'd say any advice on how to use deprecated commands is a bit misguided. Take a peek at mysqldump. There are other options as well, such as LVM snapshots etc.
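From PHP that usually means shelling out rather than running a query, roughly like this (credentials and paths are placeholders):
$file = '/backups/my_db-' . date('Y-m-d') . '.sql';
exec('mysqldump --user=' . escapeshellarg('dbuser')
   . ' --password=' . escapeshellarg('dbpass')
   . ' my_db > ' . escapeshellarg($file), $output, $code);
if ($code !== 0) { /* handle the failure */ }
// restoring later works the other way round: mysql --user=... --password=... my_db < backup.sql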
|
I understand that I can use MySql's command BACKUP and RESTORE to backup a database and rollback when needed.
My question is, would I be able to execute it this way:
sql="BACKUP my_db TO DISK my_backup_folder WITH FORMAT #";
if ($stmt = $this->connect->prepare($sql)) {
$stmt->execute();
$stmt->close();
} else {
$error = true;
$message['error'] = true;
$message['message'] = CANNOT_PREPARE_DATABASE_CONNECTION_MESSAGE;
return json_encode($message);
}
And the restoration made in the same fashion:
sql="RESTORE DATABASE my_db FROM DISK my_backup_folder WITH FILE #";
if ($stmt = $this->connect->prepare($sql)) {
$stmt->execute();
$stmt->close();
} else {
$error = true;
$message['error'] = true;
$message['message'] = CANNOT_PREPARE_DATABASE_CONNECTION_MESSAGE;
return json_encode($message);
}
And in each case what does # stand for, is that .bak ? And is there anything else I should add besides what's in there ?
|
Create MySql Database Backup / Rollback
|
It looks like datastore_admin was removed for Python 2.7. For more details:
http://groups.google.com/group/google-appengine/browse_thread/thread/32db6e9e8e55a5c6?pli=1
|
I have datastore admin enabled in one app and the backup system seems to work or at least it reports being launched as mapreduce jobs in the queue. Now I want to make that for my other app but the admin console fails when hitting the datastore admin link:
These are my settings from app.yaml:
builtins:
- remote_api: on
- datastore_admin: on
- appstats: on
- admin_redirect: on
- deferred: on
What am I doing wrong? It works for one app and not the other. I have not enabled federated login so it should work.
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 187, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 236, in _LoadHandler
__import__(cumulative_path)
ImportError: No module named datastore_admin
What should I do to enable datastore admin and the backup system there? It works for my other app and that is also python 2.7 / GAE.
Thanks
|
Fixing my datastore admin?
|
No. AFAIK, not possible on non-rooted phones. Android restricts access to only your app-specific content.
|
Does anyone know if it is possible to access the data files from another app on your app? Really just copying a directory plus contents. This is for a backup system. I am trying to create a backup system that works decently on non rooted phones. I currently can only copy data from my own app. I can't navigate further up the file system. Getting a permission denied.
|
Android: Gaining access to other app files through app on non rooted phone
|
you just need to change the third input of the tar command
$filename= "Backup.tar"; // The name (and optionally path) of the dump file
$ftp_server = "IP"; // Name or IP. Shouldn't have any trailing slashes and shouldn't be prefixed with ftp://
$ftp_port = "21"; // FTP port - blank defaults to port 21
$ftp_username = "User"; // FTP account username
$ftp_password = "Pass"; // FTP account password - blank for anonymous
$filename = "public_html/backups/" . $filename . ".gz";
$command = "tar cvf ~/$filename /public_html/*";
$result = exec($command);
$command = "gzip -9 -S .gz ~/$filename";
$result = exec($command);
|
$filename= "Backup.tar"; // The name (and optionally path) of the dump file
$ftp_server = "IP"; // Name or IP. Shouldn't have any trailing slashes and shouldn't be prefixed with ftp://
$ftp_port = "21"; // FTP port - blank defaults to port 21
$ftp_username = "User"; // FTP account username
$ftp_password = "Pass"; // FTP account password - blank for anonymous
$filename = "public_html/backups/" . $filename . ".gz";
$command = "tar cvf ~/$filename ~/*";
$result = exec($command);
$command = "gzip -9 -S .gz ~/$filename";
$result = exec($command);
This is my working backup that I use. It backs up everything on the server including emails (for example /mail/. I only want to backup the /public_html folder and all subdirectories under it. It creates a tar.gz file in the /public_html/backups/ folder. The PHP script also runs from the /public_html/backups/ folder. Any idea on how to restrict what is saved from '/' to '/public_html/' ? Thanks!
|
PHP Backup Script to backup public_html only
|
The purpose of a database dump, or of any backup data, is to provide a snapshot of what the database looked like at a particular point in time so that if something catastrophic happens you can revert to that version of a database. If you make changes to the database and want a more up-to-date dump that reflects those changes, the solution is not to modify a dump you already created; rather, it is to make a new dump from the database. When you are satisfied, you can then delete the older dump if space constraints are acute.
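If cron is all you have, a weekly job that produces a fresh dump is usually enough (credentials and paths are placeholders):
0 3 * * 0 mysqldump -u backupuser -pSECRET mydb archived_table | gzip > /backups/archive-$(date +\%F).sql.gz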
|
Is there a way to update a mysqldump file?
Let's say I created a dump file for a table that archives data over 3 months old and removes those archived records from the original table. I want to update that same file every week with the newly expired data while retaining the old archived records. I know there are other ways to do this with a more legitimate backup solution, but sadly I don't have much room here to play around with server configuration and such. The best I can do is some cron jobs.
Thanks for the input in advance.
|
updating Mysqldump file
|
If you use CTR (counter) mode, I believe you will get the result you require.
Or a stream cypher would give the same effect. A block cypher in CTR mode is effectively a stream cypher.
– rossum
Dec 27, 2011 at 0:27
|
I would like to encrypt a file with most secure algorithm that also meets the following requirement.
Let's say we have a text file that has 100 Bytes and we encrypt it.
Now we change 1 byte in original file and encrypt again.
If we make a diff of the encrypted files, then an ideal encryption algorithm should produce the shortest diff possible - e.g. 1 byte.
(Essentially I want to do an incremental backup of encrypted files and minimize bandwidth requirements)
|
What encryption algorithm preserves file differences?
|
You can get all object definitions using SHOW CREATE TABLE, SHOW CREATE VIEW, SHOW CREATE TRIGGER, and so on; all object names can be read from the information_schema system database.
Only the 'INSERT INTO table' statements you have to generate yourself.
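A sketch of the schema half of that with Connector/NET (connection string and file name are made up; generating the INSERT statements is left out):
using System;
using System.Collections.Generic;
using System.IO;
using MySql.Data.MySqlClient;   // assumption: MySQL Connector/NET is referenced

class SchemaDump
{
    static void Main()
    {
        using (var conn = new MySqlConnection("server=localhost;uid=root;pwd=secret;database=mydb"))
        using (var writer = new StreamWriter("backup.sql"))
        {
            conn.Open();

            var tables = new List<string>();
            using (var cmd = new MySqlCommand("SHOW TABLES", conn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    tables.Add(reader.GetString(0));

            foreach (var table in tables)
            {
                using (var cmd = new MySqlCommand("SHOW CREATE TABLE `" + table + "`", conn))
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        writer.WriteLine(reader.GetString(1) + ";");   // column 1 holds the CREATE TABLE text
            }
        }
    }
}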
|
I am creating a small tool using C# and MySQL. This tool has a small DB in MySQL, and I want to back up the DB as a .SQL file that I can load back into MySQL when needed.
The problem is how to get the .SQL file on the client side using a query etc., and how to load it back into the DB.
I don't want to use a SQL dump or any other method for which I have to write a batch file or call an external process on the client.
Is there any simple and straightforward way to do it with C# programming?
Help needed.
Regards.
|
My sql database backup in a .SQL file on client side?
|
You could use Amazon AWS .Net SDK. You can download it from here:
http://aws.amazon.com/sdkfornet/
Here's the example function to download file from S3:
function DownloadS3File([string]$bucket, [string]$file, [string]$localFile)
{
if (Test-Path "C:\Program Files (x86)")
{
Add-Type -Path "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.dll"
}
else
{
Add-Type -Path "C:\Program Files\AWS SDK for .NET\bin\AWSSDK.dll"
}
$secretKeyID= $env:AWS_ACCESS_KEY_ID
$secretAccessKeyID= $env:AWS_SECRET_ACCESS_KEY
$client=[Amazon.AWSClientFactory]::CreateAmazonS3Client($secretKeyID,$secretAccessKeyID)
$request = New-Object -TypeName Amazon.S3.Model.GetObjectRequest
$request.BucketName = $bucket
$request.Key = $file
$response = $client.GetObject($request)
$writer = new-object System.IO.FileStream ($localFile ,[system.IO.filemode]::Create)
[byte[]]$buffer = new-object byte[] 4096
[int]$total = [int]$count = 0
do
{
$count = $response.ResponseStream.Read($buffer, 0, $buffer.Length)
$writer.Write($buffer, 0, $count)
}
while ($count -gt 0)
$response.ResponseStream.Close()
$writer.Close()
echo "File downloaded: $localFile"
}
That's cool and all but I am trying to figure out how to do this using the Cloudberry Powershell cmdlet. This is a powershell and SQL Agent problem. I have looked at the Amazon .NET SDK but my shop that I work in does not want to re-invent cloud software. Cloudberry costs $40 and I cost much more an hour.
– D3vtr0n
Dec 14, 2011 at 16:10
|
I am trying to automate my SQL database backup process. My goal is to use the Cloudberry Powershell cmdlet to give me direct control and access over my S3 buckets. I am able to do this manually but cannot get my SQL jobs to work with this.
According to Cloudberry's installation instructions, I shouldn't have to register the Cloudberry Powershell snap-in if Powershell is already installed. I have found that to be false. I have tried to register it, both 64-bit and 32-bit with no luck.
This works when executed manually/explicitly from the ISE:
Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn
$today = Get-Date -format "yyyy.MM.dd.HH.mm.ss"
$key = "mykeygoeshere"
$secret = "mysecretgoeshere"
$s3 = Get-CloudS3Connection -Key $key -Secret $secret
$destination = $s3 | Select-CloudFolder -path "ProductionBackups/MyClient/log/" | Add-CloudFolder $today
$src = Get-CloudFilesystemConnection | Select-CloudFolder "X:\backups\MyClient\current\"
$src | Copy-CloudItem $destination -filter "log.trn"
^ When this command is executed in a SQL Agent job, it fails with this message:
Executed as user: DB-MAIN\SYSTEM. A job step received an error at line 1 in a PowerShell script. The corresponding line is 'Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'The term 'Add-PSSnapin' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. '. Process Exit Code -1. The step failed.
I read in this blog post that SQLPS.exe cannot execute 'Add-PSSnapin' commands? Is that true? I cannot find any clarification on the subject...
how can I automate my SQL backup files to the Amazon S3 cloud? I have tried everything. TNT Drive was a huge waste of time. I am hoping Cloudberry can do it, any tips?
|
How to use Cloudberry Powershell Snap-in (for Amazon S3) from within a scheduled SQL Agent Job?
|
From my knowledge, it's not possible to restore a 2.X backup to 1.9.
|
So that is the question. I need it because I created my course in Moodle 2.1, but Moodle 1.9 is installed at my university. So I need a way to export/restore my course to the older Moodle version. Thanks for any help.
|
Is it possible to restore course backup from Moodle 2.x to Moodle 1.9(8)?
|
Have you looked at using linked servers? We had a somewhat similar data consistency issue and used a linked server setup to provide for triggered data propagation. Once you have the linked servers defined you can issue your statement pretty much as you have it listed in your question.
http://msdn.microsoft.com/en-us/library/ms188279.aspx
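A sketch of what that looks like once the linked server is defined (server, database and table names are placeholders):
EXEC sp_addlinkedserver @server = 'REMOTESRV', @srvproduct = '',
     @provider = 'SQLNCLI', @datasrc = 'REMOTECOMPUTER\SQLEXPRESS';

SELECT * INTO dbo.target_table
FROM [REMOTESRV].[SourceDb].[dbo].[source_table];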
yes I have found my data is not consistent and therefore can not be imported using bulk import. Database backup does work fine though. Link Server is new to me, will take a look. Thanks
– TheTechGuy
Oct 24, 2011 at 20:47
|
I found this question Copy table to a different database on a different SQL Server which is close to what I want, but my two databases are on two different machines. I am interested in backing up 1 or 2 tables, not the whole database. I tried BCP backup and bulk insert, but I consistently get an error on importing a date field (type mismatch or invalid character for the specified codepage). I gave up after I successfully imported, into a new test table, the piece of the CSV file that I was getting the error for.
Now I would like something like this
select INTO mycomputer\SQLEXPRESS\target_table from ReMOTECOMPUTER\SQLEXPRESS\source_table
or anything similar? Can I do that, what is the proper syntax if yes. I tried but was not successful.
|
copying data between servers on two different machines - SQL
|
I've successfully used Amazon's S3 to store the "output" data of web and non-web applications. Using a service like that is beneficial from the single-point-of-failure perspective because then any other instance of that web application, or a different type of client, on the same server or in a completely different datacenter still has access to the same output files. Another similar option is Rackspace's CloudFiles.
Both of these services are very redundant, and you could use them as the backup and keep the primary storage on your server, or use them as the primary and keep a backup on your other web server. There are lots of options! Hope this info helps.
|
I am making a DR plan for a web application which is hosted on a production web server. That web server also acts as file storage for the feed upload files (used by the web application as input) and report files (the output of the web application's processing). If the web server goes down, the file data is also lost, so I need to design a solution and give recommendations that eliminate this single point of failure.
I have thought of some recommendations as follows-
1) Use a separate file server; however, this requires new resources
2) Attach a data volume mounted on the web server which is mapped to some network filer ( network storage) which can be used to store the feeds and reports. In case the web server goes down , the network filer can be mounted and attached to the contingency web server.
3) There is one more web server which is load balanced, but it is not currently being used as file storage. If we implement a feature which regularly backs up the file data to that second load-balanced web server, we can start using it in case the first web server goes down. The backup can be done through a backup script, a separate Windows service, or a scheduled job that runs the backup every night.
Please help me review the above or suggest new recommendations to help eliminate this single point of failure on the web server. It would be highly appreciated.
Regards
Kapil
|
Web Server being used as File Storage - How to improvise?
|
Mysqldump is a simple and reliable way to do backups.
Explore the documentation:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
If you want to copy your dumps on another server, you can write a bash script that copies dumps over ssh.
Here are some links that may help you:
http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html
http://christiank.org/wp/2010/12/pipe-a-gzipped-mysql-dump-over-ssh/
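For example, dumping straight to the remote machine in one go (host names and credentials are placeholders):
mysqldump -u dbuser -pSECRET mydb | gzip | ssh user@backuphost "cat > /backups/mydb-$(date +%F).sql.gz"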
|
I need to find a good backup solution for the site I have hosted with HostGator. HostGator only backs up sites with less than X number of files. Unfortunately I've exceeded that limit and they aren't backing up my site any more.
This means that right now i have no database backups. I do have backups of all the files on my computer, which I'm good with. But I need to find a solution that backs up my databases with no manual maintenance. HostGator doesn't offer additional backup services.
My site has cPanel access.
Any suggestions for solutions?
|
Backups for my site, file and/or SQL
|
xcopy provides methods for copying files based on their Archive attribute. The option you would likely want is /M, which copies only files with the Archive attribute set, and resets that attribute. It kind of relies upon the Archive attribute being set, but Windows does this by default (I think) when creating or modifying a file.
For example (a rubbish example, but an example nonetheless):
C:\tmp>echo hello > out.txt
C:\tmp>xcopy /M *.* ..
C:out.txt
1 File(s) copied
C:\tmp>xcopy /M *.* ..
0 File(s) copied
C:\tmp>echo hello > out2.txt
C:\tmp>xcopy /M *.* ..
C:out2.txt
1 File(s) copied
Only files that are new/modified since the last copy are copied.
Alternatively, depending on your Windows version, you could look into the much more powerful (and hence more confusing) robocopy.
|
I have an Oracle archives folder on windows, which I need to take an incremental backup everyday at 6:00AM.
I need to copy all the files generated during the previous day, and place them in a folder with today's date.
What is needed is that, the files generated after the last backup was taken, only should be copied [i.e, the file names with the sequence after yesterday's backup's last file].
I tried xcopy, but it doesn't provide any facility for copying files based on modified time.
I need to write a batch script for this, please help me out!
|
Incremental archives backup batch script
|
How about something like this :
set fromField to text 7 thru -1 of (do shell script "cat /test.eml | grep From:")
set dateField to text 7 thru -1 of (do shell script "cat test.eml | grep Date:")
set toField to text 5 thru -1 of (do shell script "cat /test.eml | grep To:")
set subjectField to text 10 thru -1 of (do shell script "cat /test.eml | grep Subject:")
The body is a little harder since you need to decide if you only want the email's body or also all the previous emails that are embedded into the body. The following gets the body of my test email.
set temp to do shell script "cat /test.eml"
set text item delimiters to "--"
set temp2 to (text item 3 of temp)
set text item delimiters to "
"
set messageField to paragraphs 6 thru -1 of temp2 as text
Make sure you watch out for the encoding of the file if you use other characters.
|
I am writing an AppleScript to back up all my emails. A lot of emails I have already saved as .eml files on my local hard drive and deleted them from the server. Is there a way to load the .eml files with AppleScript as messages to get the date sent, subject, etc. from them?
|
AppleScript - Get information of .eml file
|
Library/WebKit is included in the backup. With the exception of Library/Caches, everything in the Library directory is backed up.
The data included in an iCloud backup for your app is identical to that included in an iTunes backup, so you can examine the contents by backing up to iTunes and using a tool like iPhone Backup Extractor to see what's included.
|
I have a phonegap iOS app using the sqlite DB of Webkit (through UIWebView), and I wonder if the sqlite data will be saved with iCloud Backup (iOS5). The sqlite data are stored in Library/WebKit folder. In the apple doc, they say:
The placement of files in your application’s home directory determines what gets backed up and what does not. Anything that would be backed up to a user’s computer is also backed up wirelessly to iCloud. Thus, everything in the Documents directory and most (but not all) of your application’s Library directory.
But it doesn't say exactly which folders in the Library directory are not saved. And I don't know how to access iCloud to check whether the directory is saved.
|
Is iOS 5 iCloud Backup save webkit data for app using the storage of a UIWebView?
|
No, that does not appear to be a feature included in the PG Backups addon provided by Heroku: http://devcenter.heroku.com/articles/pgbackups#restoring_from_a_backup
|
Thankfully this is a hypothetical, planning-ahead sort of a question. Can you restore part of a database using Heroku's backup addon, or otherwise? So, for instance, only restore records in all tables which have a client_id of 5?
|
Partial database recovery on Heroku
|
phpMyAdmin includes this functionality. You may want to look into how they do this.
|
Unfortunately, the hosting doesn't allow use of the mysqldump command.
|
Use PHP script to backup database without using Mysqldump, any recommended CLASS?
|
Why didn't you write this question in the library forum?
OK, here is some sample code:
function SaveFolderToBigTableFile(const aFolder, aFile: TFileName): boolean;
var SR: TSearchRec;
BT: TSynBigTableString;
aPath: TFileName;
Path: RawUTF8;
begin
DeleteFile(aFile);
result := true;
BT := TSynBigTableString.Create(aFile);
try
aPath := ExtractFilePath(aFolder);
Path := StringToUTF8(aPath);
if FindFirst(aPath+'*.*',faAnyFile,SR)=0 then
try
repeat
if (SR.Name[1]='.') or (faDirectory and SR.Attr<>0) then
Continue;
if BT.Add(StringFromFile(aPath+SR.Name),StringToUTF8(SR.Name))<>0 then
writeln(SR.Name,' added') else begin
result := false;
writeln(SR.Name,' ERROR');
end;
if BT.CurrentInMemoryDataSize>100000000 then
BT.UpdateToFile;
until FindNext(SR)<>0;
finally
FindClose(SR);
end;
finally
BT.Free;
end;
end;
The trick is to use a TSynBigTableString class using the file name as key.
You can add very fast compression just by using our SynLZ library (much faster than zip, but of course with a bit less compression ratio).
|
How can I save a whole folder with its files and subfolders using Synopse Big Table? I need to make a backup of my files without compression. I heard that Synopse Big Table is good for this purpose, but I couldn't find info on how to accomplish that.
Thanks!
|
Delphi: save a folder with files using Synopse Big Table
|
You can use SQLiteDatabase, which is very easy to use. Everything is handled with a Cursor, which points to a single row in the database at a time. Check this link:
http://www.anddev.org/novice-tutorials-f8/working-with-the-sqlite-database-cursors-t319.html
|
I'm trying to backup & restore SMS and contacts informations and wondering about the way to do that. Keeping in mind that:
I want a lightweight app
Considering the read/write delays and mechanisms specific to each file type
I'm looking for the easiest way to do this backup and restore process.
Which one of .txt, .xml and SQLite .db files will fit my needs? Or is there another, more efficient way?
|
Txt, XML or database for backup and restore purpose?
|
Android Backup is a cloud based service. There are no locally stored backup files. The backup service stores them somewhere in the google cloud.
|
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
The example BackupRestore - Backup and Restore runs successfully. But, I can't understand the backup file, because, I can't read that file. How can I use the file? Can anyone clarify this doubt for me?
|
Problem with Android backup file? How to read it? [closed]
|
I think you should:
Create a file which will store your login/password and set minimal permissions on it.
Create a bash/PHP/Perl script which will run the mysqldump command and read the settings from this file.
Set this script up in cron.
But if you run cron under root, you can specify the user/password directly in the cron entry, because only a restricted number of users can look through that file.
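One common shape for this (file names are placeholders) is a MySQL option file that only the backup user can read, so the password never appears in the crontab or the process list:
# ~/.my.cnf, chmod 600, hypothetical contents:
#   [mysqldump]
#   user=backupuser
#   password=SECRET
0 2 * * * mysqldump --defaults-extra-file=/home/backup/.my.cnf mydb > /backups/mydb.sql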
|
I want to schedule a cron job that either uses mysqldump directly, or calls a script that does the mysqldump. My question is since mysqldump requires a password to be supplied, is it secure to do mysqldump directly as a cron job? If not, while using a script, what's the most secure way of protecting the password?
|
MySQL DB Backup - Secure way
|
MySQL Reference Manual :: Using Triggers
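For the scenario described in the question, a trigger along these lines captures the previous value on every UPDATE (table and column names are invented):
CREATE TABLE posts_history (
    id INT AUTO_INCREMENT PRIMARY KEY,
    post_id INT NOT NULL,
    old_content TEXT,
    changed_at DATETIME NOT NULL
);

DELIMITER //
CREATE TRIGGER posts_before_update
BEFORE UPDATE ON posts
FOR EACH ROW
BEGIN
    INSERT INTO posts_history (post_id, old_content, changed_at)
    VALUES (OLD.id, OLD.content, NOW());
END//
DELIMITER ;
Restoring a single row is then an UPDATE back from the matching history record.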
|
I figured it would be easy to just insert a new row into some sort of history table using PHP (containing date, table, column, value, etc.) on each UPDATE operation, but having MySQL do that automatically in some way would be far more efficient.
Also, the restoring part could simply be a (quite inefficient) PHP script, unless it could be done with a single query (not sure how).
Thanks in advance
|
What's the simplest way to efficiently backup and especially undo UPDATE operations in a MySQL db?
|
Well, if we change "maintaining the most recent copy" by "copying only the modified files", then this command do that:
xcopy "C:\Base Files\*.*" C:\Backup /m /s
Regards...
|
I am trying to create a batch file that will allow me to copy files that are scattered across several directories into a single location while maintaining the most recent copy available.
This is for a Windows machine.
For example...
C:\Base Files\*.jpg
C:\Base Files\Sub\*.jpg
C:\Base Files\Sub2\*.jpg
and copy all of these to C:\Backup.
I am trying to do something like the following...
FORFILES /p "C:\Base Files\DIR01\My Images" /s /M *.JPG /c "copy @file C:\SANDBOX\DIR02"
But it dumps out each time right away with a "File not found" message.
Thanks in advance for your help!
|
Copyfiles from Directory Tree to Flat Folder - Keep most Recent
|
The backup process started working after I checked in a file that had been checked out by a user on my client's production server.
I found out what file that was by opening the corrupted backup file and looking at the title of the last entry.
|
I got a backup of a Sharepoint 2010 site that I created from our client's production server so that I can make some new changes to it on my staging server.
I can restore the site collection from the backup without a problem but when I try to create a backup of the same site on my staging server, I always get the error "Operation is not valid due to the current state of the object".
Before the error is given however, a small part of the backup file is created. If I try to run the Backup-SPSite again, it always fails at the same point and the corrupt backup files are always the same size.
Going through the logs it looks like the problem might be related to user permissions. I wonder if it's possible that the user permissions, user data, etc that came over from the client's production server are somehow screwing the backup process now because the same data cannot be found on my staging server.
The same error is mentioned here http://technet.microsoft.com/en-us/library/ee748617.aspx but UseSqlSnapshot parameter doesn't work anyway in my case.
I've been hitting my head against the wall with this problem and would appreciate if anyone has any advice on what might help! :)
The setup:
Windows Server 2008 R2
Sharepoint 2010 Server (no SP1 because it hasn't been installed on the client's production server)
Microsoft SQL Server Express Edition
Cheers!
|
Backup-SPSite gives an error: "Operation is not valid due to the current state of the object"
|
Enable iTunes file sharing in your app's settings and then write your file to the Documents folder in your app. This will then allow users to see the file through iTunes.
|
I want to implement a backup feature in the app I'm working on, that simply puts an image and some data in a folder that can be accessed through itunes or similar.
But is this possible, and what can be done?
Point of the feature is that if (for some reason) the image and data isn't sent to my server, the user will have the ability to extract it to a pc/mac, so that the image and data isn't lost.
Any help is appreciated.
|
iPhone: Saving picture/data to "public" folder?
|
There is a free product called MailStore that should do the trick. Find it at http://www.mailstore.com
|
I have a Microsoft Exchange account through my college, and my mailbox is very close to full. I'd like to download the messages to my local machine and archive them so that I can access them later if needed.
It seems to me that I should do something using POP3, but I have no idea where to start.
Note: I am currently using Ubuntu 11.04, but I can boot into Win7 if necessary.
|
How can I create a backup of my Exchange emails?
|
In your case, Amazon's S3 seems more fitting, but that's not free.
Depending on your target audience, you can create a local archive and have that picked up by your regular backup solution. You might try Wuala or SpiderOak. Expand Wuala by adding your own space. SpiderOak is free up to 2 GB (more if you invite friends) and also provides a good alternative to Dropbox (if you want to see how to migrate from Dropbox to SpiderOak, see my blog post about that).
|
I made a small backup application that simply creates an archive out specified files and folders. Now I need an online service to backup that online. Which service can i use that can be integrated into my app ?
Options I considered:
dropbox is ideal, but they have all but abandoned the desktop.
skydrive has no api.
I couldn't find any free reliable backup service that uses ftp .
anything else ? it should provide 1-2 gb of free space and be reasonably reliable.
Thanks
My app is in C#, but can be ported to any other language as well..
|
online backup solution with api for desktop
|
Your description makes it sound like you want to control your revisions. Use a Revision Control System like SVN or Git (http://en.wikipedia.org/wiki/Git_%28software%29).
At a high level, once you make changes to your code, you check those changes in and get a revision number that specifically identifies the most current state of the files at that time (e.g. revision 17). You could then switch to your production server and update the set of files to revision 17 and then compile/whatever. If all does not go as planned, it is easy to revert to any previous version (say revision 15, which was on the production server before you checked out and built revision 17).
A good RCS will manage all your files so that they are "backed up" and can you can restore the code/binaries/config files/whatever you had at any given time by referring to its revision number.
|
Let's say I am updating an application and have updated 10 files out of 100. I made these changes on my test machine and they are working, but now I want to commit these changes to the production machine. What kind of options do I have?
The ideal scenario would be:
I save the existing files that I am replacing
I copy the new files over the old ones
If the changes do not go well, I can reverse the process and restore the old files. What is the best way of doing this? Let's say I am talking about an ASP.NET application. I will also be updating SQL statements, but I can take care of that in a separate SQL script and am not worried about it in this update.
I am familiar with batch scripting, but is there a better way of doing it that has a GUI? (OK, I could create a GUI application and run the batch file from the GUI.)
Has anyone done something similar?
|
What kind of script can I use to update an application?
|
Well, as ImpEx does not support it, you may install another forum software (phpbb, smf, ipb), import your data to this installation using their tools which likely support importing from vb4, and after that import content from this installation to your new vbulletin 4 installation using impex.
There are some disadvantages however:
You may have to re-set your permissions because of the differences in permission systems in different software.
Your users' passwords will become invalid and they will have to recover them via email. This, however, might be considered a plus: because your DB was compromised, password hashes and salts could have leaked, and knowing this, it isn't very hard to brute-force someone's password on modern hardware.
|
I have a vBulletin installation that was recently defaced as a result of a sql injection flaw in the VB search interface. I'd like to move all of the posts/threads/permissions/users from the old database to a new database in which I just freshly installed vBulletin. Their impex program won't help with this, as it seems to only be able to import data from old versions of vBulletin, not from one vBulletin 4.x database to another. Does anyone know how this can be accomplished?
|
How do you move users/posts/settings from one vBulletin installation to another?
|
My gut instinct tells me this is not a good idea. It might be a better plan to save a link to where the VS 2008 back up is on Sharepoint, but adding a huge load of data to any sort of storage device is always going to be a bit fraught.
For example, whilst you can save images and files into SQL Server directly, most people prefer to save the files separately, and store the retrieval path in the database.
|
I am trying to save a Visual Studio 2008 project to a SharePoint library as a backup.
I dont know how. Please explain me if it does make any sense and how it is possible to import huge project with a lot of files into EMEA online Sharepoint 2007 Portal. I dont have MOSS, just WSS 2007.
|
does it make any sense save a Visual Studio Projects in SharePoint Online Portal?
|
My bad... It turns out VS had changed the DataContext connection string to DTZConnectionString1 and made another settings file. Now I am using the correct connection string it is working fine.
No idea why the incorrect one worked in debug but not release.
|
I have a small c# wpf app for doing some simple calculations running off a Sql Express 2008 R2 db, and in the setup section is a backup button that runs the code
using (DTZDataContext db = new DTZDataContext())
{
db.ExecuteCommand(string.Format("BACKUP DATABASE DtzDb TO DISK = '{0}'", filename));
}
The connection string I am using is
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\App_Data\DTZ.mdf;Integrated Security=True;User Instance=True;Database=DtzDb
I have confirmed there are no other open db connections at this point in the application and the backup runs fine in debug, but once I compile for release (running from VS or as standalone) I get the error:
Operating system error 32 “32(failed to retrieve text for this error Reason 15105)” BACKUP DATABASE is terminating abnormally.
How can I fix this, preferably without having to install Sql Management Studio on each machine and attaching the DB? What is the recommended way of doing backups? Why does it work in Debug but not Release.
Many Thanks
|
DB Backup runs in Debug but not Release
|
You could look at this C# example for using SMO to backup the database
http://social.msdn.microsoft.com/forums/en-US/sqlexpress/thread/95750bdf-fcb1-45bf-9247-d7c0e1b9c8d2/
The other option is to run a process and call sqlcmd from code
http://www.sqldbatips.com/showarticle.asp?ID=27
Here is some sample code to execute process
string fileName = @"C:\Backup.sql";
ProcessStartInfo info = new ProcessStartInfo("sqlcmd", @" -S .\SQLExpress -U sa -d mydatabasename -o C:\sqlout.txt -i """ + @fileName + @""" -P");
info.UseShellExecute = false;
info.CreateNoWindow = true;
info.WindowStyle = ProcessWindowStyle.Hidden;
info.RedirectStandardOutput = true;
Process p = new Process();
p.StartInfo = info;
p.Start();
|
I am completely lost here and I am running out of time. Let me explain my situation:
I have created a software in C# express 2010 and SQL Server Express 2008 R2.
Now in the settings section of my software, the user is supposed to be able to make manual back ups/restores of the database.
Also he is supposed to be able to schedule back ups of the database, which will run at time that he will set.
I do not have a clue on how to get both of these working and I am hoping that someone here may point me in the correct direction.
I need to be able to create a
backup/restore of the database from
the click of a button
I need to be able to schedule backup
processes
Please keep in mind that when the user will install the software on his computer, he will not have sql server installed (I am saying this because I am under the impression that SMO requires sql server to be pre installed on the client machine).
Thank you
|
c# express back up/restore database
|
Those things like logins are stored in SQL Server master tables - they are not part of your backup of abcDB.
Users on the other hand are part of your database, so they get backed up together with your data.
So when you restore your database on another computer, your database will contain the users, but if they depend on a login, that login might be missing from that SQL Server's master database (and will need to be re-created, e.g. as part of your installation)
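On the home machine that usually means recreating the login and re-mapping the restored (orphaned) database user to it, along these lines (assuming the database user has the same name as the login):
CREATE LOGIN abclogin WITH PASSWORD = 'abcpass', DEFAULT_DATABASE = abcDB;
USE abcDB;
ALTER USER abclogin WITH LOGIN = abclogin;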
|
I am using SQL Server Express and Visual Studio on my office PC.
I have a database abcDB. I created a login abclogin with password = 'abcpass' and default database = abcDB.
A WinForms application is allowed access to abcDB when the correct ID and password are supplied via textboxes.
If I want to use the same application on my home PC, what tasks would I have to perform?
Create an .exe of my application
Install the application from its .exe on my home PC
Install SQL Server on my home PC (do I have to install the same version as on my office PC?)
(I would not install SQL Server Management Studio, as I only have to use the application and not play with the database, which I do on my office PC)
Start using the application
But what about the database? I would back it up and restore it on my home PC. Will this also back up the 'abclogin'? Or will I have to create it again from sqlcmd or by installing Management Studio on my home PC?
When I created abclogin with default database = abcDB, was this detail stored in the abcDB.mdf file or somewhere else?
|
backup database takes login and user backups or not
|
There are approximately six million backup scripts floating around the web.
Here is one I wrote about eight years ago, it still works for me.
|
I am trying to create some backup scripts written in perl.
I am just after some examples, best practices, etc.
Problem:
Back up files and directories from various locations on a system. Is it better to move them to a temp location and then tar and zip them, or just do it from where they are?
Hope someone can help.
Thanks!
|
Perl Backup Script Advice and Examples Please
|
You wouldn't be able to monitor something like that - you're exec()ing an external program from PHP, so PHP is suspended while that external process is active. As well, tar can't report in advance how much there is to do, it can only report on what it's done or is currently doing.
A progress bar works on the basis of (how much is done / how much there is todo) * 100. Without knowing in advance exactly how many files/bytes there are to back up with tar, you can't calculate a completion percentage.
|
I know this is a rookie question, but what is actually happening with a progress bar - what is it actually monitoring?
I figure the only way to understand it is to get this answer.
When I run a piece of PHP code, how can I get a progress bar to monitor it while it runs until the code completes?
Most progress bars I have seen are for file upload and download. Is it monitoring the number of requests the browser sends when uploading the file, i.e. the bytes that are being transferred?
Can someone please give me a better understanding of what's going on?
I just want a simple progress bar to monitor progress while my PHP code is running. Say I am running this simple code to back up a website's files and folders - how could I do this?
<?php if ( isset($_POST['backup']) ) {
if(exec("cd {$_SERVER['DOCUMENT_ROOT']}/wp-content/plugins/s3bk/files;tar -cvpzf backup.tar ".get_option( 'isd-server')."")) { echo "done"; }
}
<form id="backup" method='post' action=''>
<?php if (function_exists('wp_nonce_field')) { wp_nonce_field('s3bk-updatesettings'); } ?>
<input type='submit' class="button" name='backup' value='backup'>
sorry for the rookie question just really want to get my head around whats actually happening???
Thanks
|
progress bar whats actually happening
|
If I understand properly, you want to audit just some of your data for changes (as opposed to having a full DB backup) and to be able to selectively roll them back to a previous state.
If that's what you want to do, you can do it from within SQL by using an "audit" table and populating it via triggers on the table where your original data is stored (i.e. each time a row is inserted/updated/deleted, the trigger writes to the "audit" table what the previous value was and when it was changed).
See here for an example.
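A minimal sketch of that trigger approach for SQL Server (table and column names are invented):
CREATE TABLE CustomerAudit (
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId INT NOT NULL,
    OldName    NVARCHAR(100),
    ChangedAt  DATETIME NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Customer_Audit ON Customer
AFTER UPDATE, DELETE
AS
    INSERT INTO CustomerAudit (CustomerId, OldName)
    SELECT d.CustomerId, d.Name
    FROM deleted AS d;
GO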
|
I'm looking for some feedback as to what is the best way to back data up from a sql server database which can be restored at a later date. This back up needs to be in a file which the user can use to restore data at a later date.
Does anyone have any ideas or examples as to the best way to do this using C# and SQL Server?
EDIT:
This shouldn't back the whole database, just a set of data specified by the user using dates.
Thanks
|
C# Backing up data to file which can be restored later
|
I have done something similar. Two things were key to making this architecture work.
The image files must be stored in a consistent location relative to the Core Data store file.
The URL/path stored in Core Data for the image is a relative one, not absolute. The image path is made relative to the Core Data store file.
Relative Location on Filesystem
I stored all images in a subdirectory named "Images" that lived in the same directory as the Core Data store file. E.g.
parent directory/
--> MyCoreDataStore.sql
--> Images/
-----> SomeImage1.png
-----> SomeImage2.png
Essentially, I treat the application's data as a file package: a directory containing multiple related files that all together comprise the application data or data document.
Relative image URLs
URLs (or paths) to images must be constructed at runtime. The Core Data store holds a relative path/URL to each image's location. The location is relative to the location of the Core Data store file. Your app always knows the full path/URL to the Core Data store.
I build the full path to an image location when needed using the parent directory of the Core Data store file and the relative image path held in the Core Data db.
|
I use Core Data to store the URL path of each image file in my document. How do I back both of them up to a Mac/PC? I copied the sqlite file and the images out through iTunes, and when I copied them back to the iPhone, the sqlite file seemed to be OK, but the images didn't show up. I think the paths have changed.
What can I do?
|
How to back up the Core Data sqlite file with images for my iPhone app?
|
If you don't have binlogs enabled, or cannot be sure at which point your backup snapshot was made, getting the datadir running on another server is about your only option. (For the best chance of recovery, that server should be as much like the original as possible in MySQL version and other environmental details.)
If you do have active binlogs, look at this manual.
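As a hedged illustration of the binlog route (the log file name, credentials and the time window below are placeholders, not values from the question), you would replay only the missing window into the restored server:
mysqlbinlog --start-datetime="2010-11-06 03:00:00" --stop-datetime="2010-11-07 01:00:00" mysql-bin.000042 | mysql -u root -p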
|
Okay, here's an easy one for all you MySQL folks out there:
Our Windows 2003 server crashed last night (during installation of Windows updates). We were able to restore old data from backup, but we are missing 22 hours of data. We cannot start Windows and therefore cannot access the data through MySQL Administrator, which is the program I normally use to make backups. We have, however, been able to copy all the data files to an external hard disk.
How do I access this data using MySQL Query Browser and the MySQL Administrator tool?
|
access mySql database on file basis
|
Dropbox is available for Linux.
You could also investigate unison.
|
I am developing a webapp that will be used mostly on a LAN. I have deployed this app at different locations. Some of the locations run Windows and some run Linux (no X Window System). I need to know if there is software out there that can easily synchronize my files stored somewhere in the cloud (the cloud service can be provided by the app developers, or different clouds can be used) on both Linux and Windows machines. My English is a bit rusty, so I'm going to explain this in simple words.
I will work on my local machine. I want to upload the files somewhere in the cloud, and the clients installed on the LAN servers should synchronize the files. The client must be available for Linux on the console (as a daemon if possible), while on Windows it can be something like Dropbox or Ubuntu One.
Does somebody know of such an app?
|
Cross-platform File sync tool
|
@Boyd: Import the dump file locally using
mysql -h localhost -u root -p thedatabase < dumpfile.sql
Then export just the structure using
mysqldump -u username -p --no-data thedatabase > newdumpfile.sql
|
I've got an .sql file; it's a database backup from four years ago. This .sql file is full of table creation statements but also data dumps. Because I don't actually need or want the data, I'm looking for a way to strip all the data dumps out of the .sql file, so that I'm only restoring the table structure.
I think the .sql file was generated by cPanel's backup service.
Is there some automated way of doing this? I can't do it by hand because the .sql file has an enormous number of lines.
|
Extract data dumps from .sql file
|
The suggested solutions involving mountable file systems are barely usable, though. That is exactly why you need rdiff on the server as well: it performs the delta calculations and optimizes throughput by sending only the information that is actually needed. Otherwise, why even bother using rdiff at all?
|
I am a newbie at working with rdiff.
I am taking backups using rdiff-backup from the client end to the server end.
Can anyone tell me why rdiff-backup needs to be installed on the server end as well?
How does it work?
Does rdiff access the file system directly, or does it connect to the rdiff process on the server?
|
Why does rdiff-backup need to be installed on the server end as well?
|
Two things:
Did you try the "New Configuration" option while restoring? I believe the problem is related to users/groups that were added to the site but do not exist in the new environment.
Also, can you try the restore using PowerShell with the -Force switch parameter and see if that is successful? (A rough sketch follows below.)
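For reference, a restore attempt from the SharePoint 2010 Management Shell might look roughly like the sketch below. The paths and item name are placeholders, and the exact parameters (including -Force) should be verified with Get-Help Restore-SPFarm on your own farm; this is an assumption-laden sketch, not a verified recipe.
# Placeholders throughout; check Get-Help Restore-SPFarm before running.
Restore-SPFarm -ShowTree -Directory \\backupserver\spbackups
Restore-SPFarm -Directory \\backupserver\spbackups -RestoreMethod New -Item "Farm\SharePoint - 80" -Force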
|
We made a backup of a web application through Central Administration to move it to a different server on a different domain (actually a domain controller).
We then ran a restore operation on the destination server from Central Administration but never managed to succeed,
with errors like: Object failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory. SPException: The specified user or domain group was not found.
I tried every user account possible with no success. Any clues?
|
Restore sharepoint 2010 web application on different domain
|
Try
7z a %DATE:~-4%-%DATE:~4,2%-%DATE:~7,2%.7z *.* for (YYYY-MM-DD)
or
7z a %DATE:~7,2%-%DATE:~4,2%-%DATE:~-4%.7z *.* for (DD-MM-YYYY)
(*.* is the mask for the files to back up)
|
I'm trying to set up 7-Zip for automated backups, but I'm having trouble with the output file names.
I tried using the %DATE% variable, but it just created two extra directories within my backup path:
C:\Users\Desktop\Sun 11\07\2010.7z
How can I make it just log the day and month?
C:\Users\Desktop\Sun 11-07-2010.7z
|
How can I output weekday and month in batch (log files)?
|
You can do either a full dump or an incremental one. The chief problem with incrementals is that several backups are needed to reconstruct the data. A full dump is a stand-alone, all-inclusive data set. (Example commands for both, plus a hotcopy, follow the list below.)
Which one you choose depends on several factors:
how big the dumps are
how fast the repository changes
how reliable the repository storage system is
availability requirement for the system during backup
how long an outage is tolerable in case the repository dies
how conscientiously the backups are done
probably several other factors (it's late)
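For concreteness, here is a sketch of the commands for each approach. The repository and backup paths match the question; the revision range and the idea of tracking the last dumped revision are assumptions your scheduled script would have to manage itself.
rem Full dump: one self-contained file.
svnadmin dump C:\Repositories\Dev > D:\backups\repo_dev_full.dump

rem Incremental dump: only revisions after the last backed-up one (1000 here is a placeholder).
svnadmin dump C:\Repositories\Dev --incremental -r 1001:HEAD > D:\backups\repo_dev_incr.dump

rem Hotcopy: a byte-for-byte copy of the live repository, restorable as-is.
svnadmin hotcopy C:\Repositories\Dev D:\backups\repo_dev_hotcopy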
|
I'm a Subversion noob. I was planning to take backups from the repo using this command:
svnadmin dump C:\Repositories\Dev > D:\backups\repo_dev.bak
My intention is to put this into a Scheduled Task (Windows Server 2008) and run it on a daily or weekly basis. Can I use this command alone (the old backup is replaced by the new one), or do I need incremental backups separately? Or hotcopy backups?
|
Subversion: dump backup, do I need incremental backups?
|