Response | Instruction | Prompt |
---|---|---|
For Wi-Fi sync you can use synchronous BSD sockets, through which you can do your data sync.
|
For an iPhone app I need a web server/access point to sync the Core Data database with a computer, so I can back it up and replace it. But how can I create a port, or something like that, which can be reached from a Mac or PC?
|
Create WiFi Sync for iPhone app
|
A number of online storage services provide 1-2 GB of space for free. Several of those have command-line clients. For example, SpiderOak, which I use, has a client that can run in a headless (non-GUI) mode to upload files, and there is even a way to download files from it with wget or curl.
You just set things up in GUI mode, then put files into the configured directory and run SpiderOak with the right options; the files get uploaded. Then you either download ('restore') all or some of the files via another SpiderOak call, or get them via HTTP.
About the same applies to Dropbox, but I have no experience with that.
|
I require a small amount of online space (free) where I can
upload/download a few files automatically using a script.
The space requirement is around 50 MB.
It should be possible to automate this so it can run without manual interaction, i.e. no GUI.
I have a dynamic IP and no expertise in setting up a server.
Any help would be appreciated. Thanks.
|
online space to store files using commandline
|
I think the best way is to use mysqldump.
Normally I create a cron task that runs at a low-traffic time; it generates a dump named timestamp_databasename_environment.sql, then checks for old backups and compresses them.
I think that is a good way to do database backups.
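A minimal sketch of that kind of cron job (the database name, credentials, paths, and retention period here are placeholders, not from the answer):
#!/bin/sh
# Dump named <timestamp>_<database>_<environment>.sql, then compressed.
TS=$(date +%Y%m%d%H%M%S)
DB=mydb
ENV=production
DIR=/var/backups/mysql
mysqldump -u backup_user -p'secret' "$DB" > "$DIR/${TS}_${DB}_${ENV}.sql"
gzip "$DIR/${TS}_${DB}_${ENV}.sql"
# Remove compressed dumps older than 14 days.
find "$DIR" -name "*_${DB}_${ENV}.sql.gz" -mtime +14 -delete
A crontab entry such as 30 3 * * * /usr/local/bin/backup-mydb.sh would run it nightly at a quiet hour.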
If you do it at a time of low traffic, the impact is small, or even imperceptible. I have an application with about 4,000 uses per hour on average and I use this scheme to back up my database; normally I run it at dawn.
– GodFather
You need to watch disk space, because over time the total size of the backups will grow.
– GodFather
|
|
I was wondering what's the best way to back up MySQL (v5.1.x) data -
creating an archive of the mysql data dir
using mysqldump
What are the pros/cons of the above? I am guessing mysqldump has some performance impact on a live database. How much impact are we talking about?
We plan to take a backup every few hours, let's say every 4 hours. What's the best practice around MySQL backups, or database backups in general?
|
Mysql backup strategy
|
First of all, if you want to enable remote connections to your server (I'm not sure if this is what you're after), try this: http://support.microsoft.com/kb/914277 . You will also want to make sure the mixed authentication option is enabled.
|
|
I recently found this link which shows a great example of backing up and restoring a SQL Server database. However, my SQL Server only uses Windows Authentication and so it does not really require a username and password. To account for this, I changed the line srvConn.LoginSecure = false; to srvConn.LoginSecure = true;
I was expecting to connect successfully to the server. However, this example returns an exception saying that it was unable to connect to the server.
Can anybody help me, please? I have to learn from this example to be able to apply the same concept to a project I'm working on. Thank you very much.
|
Restore a Database Programmatically (SQL Server)
|
#!/bin/sh
DATE=`/bin/date +%Y%m%d`
cd /path/to/your/folder
for folder in *; do
    if [ -d "$folder" ]; then
        tar -cvzf "$folder-$DATE.tar.gz" "$folder"
    fi
done
|
|
How could I read the contents of a parent folder and, if sub-folders are found, make a tar.gz file of each subfolder found? I know that the subfolders will have the following filename format: name-1.2.3. What I want is to create a tar.gz file that looks like: name-1.2.3-20100928.tar.gz. Any help will be appreciated.
|
bash find subfolder and backup
|
One solution would be to use a version control system, such as Subversion, Mercurial, or Git. Depending on the types of files (binary or text), some systems work better than others.
|
|
I've been thinking about a model for saving snapshots of a Windows filesystem. Obviously you only want to back up new files or files that have changed - for stuff that hasn't changed you don't want to make another copy. rsnapshot http://www.rsnapshot.org/ (for Linux) accomplishes this by creating a new snapshot directory for each save point and hardlinking to unchanged files.
Windows doesn't really have hard and soft/symbolic links as far as I understand, although it has shortcuts(?). What would be the equivalent link structure in Windows? Would such a versioning model work? Or would a different approach be better, such as storing the versioned backups in some kind of database? I notice that SyncBackSE http://www.2brightsparks.com/syncback/sbse-features.html has versioning - any idea how this is implemented?
Thanks
Edit: I've now had a look at SyncBackSE: the versioning feature does not mean a snapshot view - it's simply keeping old copies of a file with a prepended time stamp.
|
Model for versioned backups on MS Windows
|
Are you positive it is only the big table with blobs? Try running the dump without that table. Do that table individually and, if it still gets stuck, break it up.
Create the inserts in 3-4 groups and see if any go through. A process of elimination will help narrow down whether there's a row-specific issue (i.e. corrupted data?) or whether MySQL is simply taking a while to write.
I'd advise opening up a second mysql shell or using phpMyAdmin to refresh the table view and see if new records are being written. MySQL isn't verbose during its dumps. It may simply be taking a while to load in all the inserts.
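As a sketch of that monitoring idea (the table name big_blob_table is a placeholder), you could run something like this from a second shell every so often while the restore is in progress:
mysql -uuser -ppass dev_supertext -e "SHOW FULL PROCESSLIST; SELECT COUNT(*) FROM big_blob_table;"
A growing row count tells you the inserts are still being applied rather than hung.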
|
I have a backup file created with mysqldump. It's about 15GB and contains a lot of blobs. Max size per blob is 30MB.
mysqldump -uuser -ppass --compress --quick --skip-opt supertext > supertext.sql
Now when I try to restore the backup, the process just gets stuck.
mysql -uuser -ppass dev_supertext < supertext.sql
It gets stuck while writing back the biggest table with the blobs. There is no error message and mysql is still running fine.
This is on a 64bit 5.1.48 community edition for Windows server.
max_allowed_packet is set to 40MB and is not the problem. I had that before.
Any other settings I could check or something I can monitor during the restore?
Didn't see anything special in the query or error log. Maybe there is a timeout?
Just FYI:
I've already posted this question in the MySQL Forum, but got no response.
http://forums.mysql.com/read.php?28,377143
Thanks for any tips.
|
Restore of MySQL Backup just stuck
|
The third parameter is bad; it should be:
objDL.BackupTables(220, i, 1)
Yes, that's not a problem... I am actually using other values; these were only examples. Do you know what the actual issue might be?
– K-M
If you give us your DL code it might help. What exception does it throw?
– wassertim
I did more debugging and I think the loop is not working because I have dynamic SQL in my SP, so maybe it cannot read it well. Any suggestions on how I can make the SP work, please?
– K-M
|
|
I am trying to run a for loop for a backup system, and inside it I want to run an SP in a loop. Below is the code that does not work for me.
Any ideas, please?
Dim TotalTables As Integer
Dim i As Integer
TotalTables = 10
For i = 1 To TotalTables
objDL.BackupTables(220, i, 001) ' (This is a method from the DL and the 3 parameters are integers)
Next
I tried the SP and it works perfectly in SQL Server.
|
Loop Stored Procedure in VB.Net (win form)
|
There are many ways to do this. It really depends on how complicated your "database" is.
The simplest solution is to write to a text file in a CSV format:
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
public class FileOutput {
    public static void main(String[] args) {
        File file = new File("C:\\MyFile.csv");
        FileOutputStream fos = null;
        PrintWriter output = null;
        try {
            fos = new FileOutputStream(file);
            output = new PrintWriter(fos);
            output.println("Column A, Column B, Column C");
            // dispose all the resources after using them.
            output.flush();
            output.close();
            fos.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Or, if you're looking for an XML solution, you can play with Xerces API, which I think is included in the latest JDK, so you just have to include the packages.
Thanks for the reply; my concern is far bigger, I think.
– jawath
|
|
How do I back up / restore any kind of database inside my Java application to flat files? Are there any tools or frameworks available to back up a database to a flat file like CSV, XML, or a secure encrypted file, or to restore from CSV or XML files to a database? It should also be capable of table-wise backup and restore.
|
java database backup and restore
|
I used the verify command and I cannot find some of the revisions. When I delete the copy and run the svn hotcopy command again, the verify output then includes the missing revisions. I guess that means it does not overwrite them. I cannot seem to find a flag that will let me overwrite the files. If no one else offers up an answer, I will have to pick this one as the right answer.
|
I have used SVN hotcopy to make a backup of the repository every weekday. When I look at the Windows Scheduled Task logs I see that the job has run successfully. But when I do an svnadmin verify I seem to have only a subset of the revisions. Do I need to delete the files first, or is there an overwrite-existing flag? svnadmin help hotcopy revealed nothing.
|
Does svn hotcopy overwrite existing files?
|
I can't think of any option in mysqldump that would skip the empty tables in your backup. Maybe the --where option, but I'm not sure you can do something generic with it. IMHO, post-processing in a second script is not that bad.
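If you would rather avoid the post-processing, one alternative sketch (not a mysqldump built-in, and TABLE_ROWS is only an estimate for InnoDB, so treat it as a heuristic) is to list the non-empty tables first and dump only those:
TABLES=$(mysql -N -B -uuser -ppass -e "SELECT table_name FROM information_schema.tables WHERE table_schema='mydb' AND table_rows > 0")
mysqldump -uuser -ppass mydb $TABLES > mydb_nonempty.sql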
|
|
I have multiple large MySQL backup files all from different DBs and having different schemas. I want to load the backups into our EDW but I don't want to load the empty tables.
Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this.
If anyone is interested, this is my AWK script:
EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.
# File: remove_empty_tables.awk
# Copyright (c) Northwestern University, 2010
# http://edw.northwestern.edu
/^--$/ {
    i = 0;
    line[++i] = $0; getline
    if ($0 ~ /-- Definition/) {
        inserts = 0;
        while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
            # If we already have an insert:
            if (inserts > 0)
                print
            else {
                # If we found an INSERT statement, the table is NOT empty:
                if ($0 ~ /^INSERT /) {
                    ++inserts
                    # Dump the lines before the INSERT and then the INSERT:
                    for (j = 1; j <= i; ++j) print line[j]
                    i = 0
                    print $0
                }
                # Otherwise we may yet find an insert, so save the line:
                else line[++i] = $0
            }
            getline # go to the next line
        }
        line[++i] = $0; getline
        line[++i] = $0; getline
        if (inserts > 0) {
            for (j = 1; j <= i; ++j) print line[j]
            print $0
        }
        next
    } else {
        print "--"
    }
}
{
    print
}
|
How to remove empty tables from a MySQL backup file
|
I believe the only concern is that the Log Sequence Number (LSN) chain is unbroken. This will start with your full backup, but can also have as many subsequent transaction log backups as you need. You mentioned the database will be offline until the log shipping is configured, so you shouldn't have an issue with transactions building up on the primary until the backup finishes copy/restore. However, if you wanted to bring the primary online, you might have to take frequent transaction log backups to avoid running out of space as the transactions build up (depending on usage).
This is okay, because you could easily copy those transaction logs over to the secondary, restore them, then enable log shipping. As long as all the backups taken on the primary have been restored on the secondary, the LSN chain is maintained, so the first log backup that is shipped over should restore correctly. Time doesn't matter in this case.
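A rough sketch of that sequence with sqlcmd (server names, database name, and paths are placeholders; every restore on the secondary stays WITH NORECOVERY until log shipping takes over):
sqlcmd -S PRIMARY -Q "BACKUP DATABASE MyDb TO DISK = N'D:\bak\MyDb_full.bak'"
sqlcmd -S PRIMARY -Q "BACKUP LOG MyDb TO DISK = N'D:\bak\MyDb_log1.trn'"
sqlcmd -S SECONDARY -Q "RESTORE DATABASE MyDb FROM DISK = N'\\share\bak\MyDb_full.bak' WITH NORECOVERY"
sqlcmd -S SECONDARY -Q "RESTORE LOG MyDb FROM DISK = N'\\share\bak\MyDb_log1.trn' WITH NORECOVERY"
As long as every log backup taken on the primary is restored in order, the LSN chain stays intact regardless of how long the copy takes.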
|
We have a new filestream database that will be initially loaded with 65GB data, for which we'd like to configure log shipping to a remote (different continent) location.
For the initial setup of log shipping, is there any threshold for the time between the backup of the primary and its restore onto the secondary? The new database will essentially be offline until we have log shipping configured. Due to the size of the database, it may be some time (days) between the database initially being backed up and then restored on the target. Will this be a problem?
|
Initial configuration for log shipping of a large database in SQL Server 2008
|
You can start SQL profiler, then carry out the actions in Management Studio to display the information you require, then look in the profiler output to see what has been executed in the background to get you that info.
My guess is that these tables/catalogs are being queried:
sys.backup_devices
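Whatever SSMS actually runs under the hood, a few T-SQL commands that expose backup media contents are worth trying (the server name and file path below are placeholders):
sqlcmd -S myserver -E -Q "RESTORE HEADERONLY FROM DISK = N'D:\backups\mydb.bak'"
sqlcmd -S myserver -E -Q "RESTORE FILELISTONLY FROM DISK = N'D:\backups\mydb.bak'"
sqlcmd -S myserver -E -Q "SELECT TOP 20 database_name, backup_finish_date FROM msdb.dbo.backupset ORDER BY backup_finish_date DESC"
RESTORE HEADERONLY lists the backup sets on the media, RESTORE FILELISTONLY lists the files inside a set, and msdb.dbo.backupset holds the server's backup history.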
|
In SQL Server Management Studio (I have 2008), I can see the contents of the media i have backed up to, be it disk or tape. I can see information such as what files it currently includes, the dates they were backed up, etc... Is there a way to do this in T-SQL? I would like to specify a device (which is linked to a file location) and query it for its contents. Any thoughts?
|
Using T-SQL to get contents of backup media
|
DPM comes as a part of System Center and it makes use of VSS writers. In other words, it is dependent on the VSS service: if VSS fails, the DPM backup won't work. It also makes use of application writers for application-specific backups (like MS SQL databases and Exchange mailboxes).
DPM is a complete backup solution for almost all Microsoft platforms, be it VMs, SharePoint, SQL databases, mailboxes, or files. It also has backup to Azure and backup to tape to meet short-term and long-term goals.
|
|
I am trying to implement a VSS backup solution similar to what is described here:
Volume Shadow Copy (VSS)
There is a new offering called Data Protection Manager:
http://www.microsoft.com/systemcenter/dataprotectionmanager/en/us/overview.aspx
How different is this from a VSS-based solution?
Does it solve the problem of implementing a VSS writer?
|
volume shadow copy vs data protection manager
|
Hmm this is quite difficult to understand. Sounds like you should make a new MySQL table containing snapshots of the calculations and the time they were saved?
What kind of data are you looking to snapshot?
This was our first thought, but the original database is very large and spread over a few tables, so it would be a pain to do it this way.
– jimbo
An actual image (JPEG or the like) of the dynamic page would do; we would attach it to the calculation, so that if the administrator needed to revert back they could.
– jimbo
Right, I've grabbed the class from this site (phpclasses.org/browse/package/4608.html). You can find it here: pastebin.ca/1689336. You can use that PHP class to save a screenshot of the page as a JPG.
– Gausie
I think this would have been a good call, but we are running on a Linux server...
– jimbo
Do you have access to the server? Can you use PHP's exec() with no problem? If so, give this a go: mysql-apache-php.com/website_screenshot.htm
– Gausie
|
|
We have a web application that creates a dynamic PHP page with all the MySQL-stored details a user has entered via a number of forms. So far so good, but we want this information stored somehow so it can be referred to at a later date, as an administrator can make changes to the data, which affects calculations that are worked out from this saved data.
When going back over this saved data we need to be able to see all the information submitted for that particular calculation, so if the data has changed we will still see what it was for that calculation. We have thought that maybe a snapshot when the calculation is done, a PDF of the webpage, or something similar would do, but is this simple to do?
I hope this makes sense...
|
How to create a snapshot or clone of PHP, MySQL page... Inspiration needed
|
By default it will lock all tables; this stops anything being updated.
If you are using transactional engines exclusively (InnoDB) then you probably want to use --lock-tables=0 and --single-transaction
This will (effectively) use an MVCC snapshot.
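Applied to the command from the question, that suggestion would look something like this, with --skip-lock-tables as the equivalent of --lock-tables=0 (credentials elided as in the question):
mysqldump --user=* --password=* --single-transaction --skip-lock-tables --all-databases --log-error=*.log | gzip > *.gz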
Please post exactly what error you're getting, as well as the command you're using.
I edited my post with the command I'm using. I'm not getting any error; the file is too small and corrupt. Sometimes the backup does work with the same command.
– MichaelD
|
|
When I try to get a dump of a MySQL database, the dump stops when a row in it is updated. How can I prevent that? I have already tried the following options with no result:
-f (force continuation even on error)
-x (lock all tables)
When I log errors, I get nothing.
The command I'm using:
mysqldump --user=* --password=* --all-databases --log-error=*.log | gzip > *.gz
|
mysql dump stops when a row is updated
|
I don't see why you can't just do:
cat mydb-3Wed.sql-* | /usr/local/mysql_versions/mysql-5.0.27/bin/mysql --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf --user=myuser --password=mypw -D mydb
The * globbing should provide the files in the sorted order, check with ls mydb-3Wed.sql-* that they actually are though.
Good to see proper basic shell knowledge applied.
– user140327
Yup you're right, sorry for such a basic question. Thanks for replying, Simon B
– Simon B
|
|
Hi, my database has started to go over 2GB in backed-up size, so I'm looking at options for splitting the file and then reassembling it to restore the database.
I've got a series of files from doing the following backup shell file:
DATE_STRING=`date +%u%a`
BACKUP_DIR=/home/myhome/backups
/usr/local/mysql_versions/mysql-5.0.27/bin/mysqldump --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf
--user=myuser
--password=mypw
--add-drop-table
--single-transaction
mydb |
split -b 100000000 - rank-$DATE_STRING.sql-;
This produces a sequence of files like:
mydb-3Wed.sql-aa
mydb-3Wed.sql-ab
mydb-3Wed.sql-ac
...
My question is: what is the corresponding sequence of commands I need to use on Linux to do the restore?
Previously I was using this command:
/usr/local/mysql_versions/mysql-5.0.27/bin/mysql
--defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf
--user=myuser
--password=mypw
-D mydb < the_old_big_dbdump.sql
Any suggestions even if they don't involve split / cat would be greatly appreciated
|
restoring mysql db from the contents of split up mysqldump
|
Are you following these instructions on migrating the SSP?
|
I've used the central admin backup facility to backup our Shared Services Provider. The backup location was a drive on a new server.
I then try to restore the SSP via central admin on the new server. It fails with an error relating to the fact that it can't find the .mdf files that it requires. It is looking in the location they were in on the original server.
Does the backup not take care of moving these .mdf files as part of the backup restore process?
Would appreciate anyone's suggestions.
|
Backup and Restore SSP on MOSS 2007 fails due to missing .mdf files
|
If you want to avoid disruption to your site, the best method is to set up a replication slave of your server and take backups from that. With this, the method of backup becomes irrelevant, as the slave will simply catch up with the master when the backup is finished, and there will be no disruption to the operation of the master.
You can also set up your application to use the slave for read queries, to reduce load on the master.
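A hedged sketch of taking the dump from such a slave while briefly pausing replication (the credentials and the exact mysqldump options are assumptions):
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --single-transaction --all-databases | gzip > /backups/slave-dump.sql.gz
mysql -e "START SLAVE SQL_THREAD;"
Pausing only the SQL thread lets the I/O thread keep fetching the master's binlog, so the slave catches up quickly once the dump is done.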
Hey, Gary! You might remember me after a mishap last year that happened when our admin tried to restart replication and killed the master instead :) This sounds like a great idea in general, but the dang replication was breaking so often that I gave up on it completely.
– deadprogrammer
|
|
Are there major advantages to InnoDB hot backup vs ZRM snapshots in terms of disruption to the running site, the size of compressed backup files, and speed of backup/restore on a medium-sized to largish all-InnoDB database?
My understanding is that InnoDB's approach is more reliable, faster, does not cause a significant outage when running, etc.
|
ZRM snapshot vs InnoDB hot backup for MySQL
|
Linking the same question asked on MSDN: Azure SQL Database Backup Fails, cannot connect to the database.
Please see the Requirements and Restrictions details for where this functionality is not supported; I have listed below the applicable items that apply to your scenario:
The Backup and Restore feature requires the App Service plan to be in the Standard tier or Premium tier. For more information about scaling your App Service plan to use a higher tier, see Scale up an app in Azure. Premium tier allows a greater number of daily back ups than Standard tier.
You need an Azure storage account and container in the same subscription as the app that you want to back up. For more information on Azure storage accounts, see Azure storage account overview.
Backups can be up to 10 GB of app and database content. If the backup size exceeds this limit, you get an error.
Using a firewall enabled storage account as the destination for your backups is not supported. If a backup is configured, you will get failed backups.
If none of the above apply to you, then the issue is an IP Address issue in that you need to enable "Allow access to Azure services" in the firewall for your Azure SQL (logical) Server.
Additional troubleshooting can be performed by leveraging Application Insights to capture the backup failure event and then drill into the collected log detail to see what the specific error is.
|
|
I am experiencing a problem configuring the backup of an SQL database using Azure.
I have web application and an associated Azure SQL database. The app connects to the DB no problem. I have pasted the connection string provided to me by the Azure UI (Home -> SQL Databases -> My SQL Database) into the connection strings section of the configuration for the App Service (Home -> App Services -> My App Service -> Configuration). I created a backup of the App Service (Home -> App Services -> My App Service -> Backups -> Configuration) and ticked my connection string to be back up my database.
After about 20 minutes, the backup fails with the error:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No such host is known.)
I can connect to the database from the SQL Server Management Studio running on my laptop, and from code running on my laptop, using the server, username and password from the connection string, why can the backup not connect to the database?
Many thanks for any advice.
|
Why can't my Azure app service backup connect to my Azure SQL database?
|
I finished such a script recently; it runs hourly for InfluxDB and MongoDB via Jenkins.
There are two folders to store the backups.
One, named "hourly_backups", stores the latest 24 hourly backups.
The other, named "daily_backups", holds the earliest hourly backup of each of the last 7 days.
After a backup finishes successfully, the script cleans up old files to control the total size.
It sounds easy, but you must be careful.
I agree that bash is better suited for this, but you can also do it in Python as an exercise.
Anyone who has questions about backup scripts for these two databases can discuss it with me.
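A sketch of that rotation scheme in bash (paths, tool versions, and retention windows are assumptions; influxd backup -portable is the InfluxDB 1.x form):
#!/bin/bash
STAMP=$(date +%Y%m%d%H)
HOURLY=/backups/hourly_backups
DAILY=/backups/daily_backups
influxd backup -portable "$HOURLY/influx_$STAMP"
mongodump --out "$HOURLY/mongo_$STAMP"
# Keep the first backup of the day as the daily copy.
if [ "$(date +%H)" = "00" ]; then
    cp -r "$HOURLY/influx_$STAMP" "$DAILY/"
    cp -r "$HOURLY/mongo_$STAMP" "$DAILY/"
fi
# Prune: keep roughly 24 hourly backups and 7 daily backups.
find "$HOURLY" -mindepth 1 -maxdepth 1 -mmin +1440 -exec rm -rf {} +
find "$DAILY" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} +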
Can you share a GitHub link?
– Charles
|
|
I'm looking for a way to automate the backup of my InfluxDB databases via Python.
Has anyone done it yet?
Or it doesn't make any sense to do it via Python and I should just stick to a bash script (like this one: https://gist.github.com/opHASnoNAME/7b367abfbba8b34f3591842db8814a8f)?
|
Python script to automate influxdb backups
|
Is your find d1 d2 -print0 including the "." directory in its output from each d1 and d2, meaning it doubles the files? Try adding --no-recursion to the tar.
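Applied to the command from the question, that suggestion would be (assuming GNU tar, whose --no-recursion option stops tar from descending into each directory it is given in addition to the files find already listed):
find d1 d2 -print0 | tar --null --no-recursion -czvf backup.tar.gz -T -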
|
|
I'm trying to back up a bunch of directories with tar and using find to get the files. I've seen this solution elsewhere in an old post but it duplicates every file and directory in the tarball; find itself doesn't duplicate anything
find d1 d2 -print0 | tar -czvf backup.tar.gz --null -T -
Using Ubuntu 18.04 LTS, Gnu find 4.7.0 and Gnu tar 1.29
I can just give the directories to tar, but curious why this behaviour is happening.
|
Duplicate files with find and tar
|
I'm retracting my initial "It can't be done" response - it should be possible by using a series of plays, but it's not very pretty.
If you really need the backup file to keep the time-stamp, you might want to put in an official request on the developer mailing list.
Use the stat module on the initial file to retrieve the file timestamp
Register the backup file name in the return value backup_file from the file or copy module.
Use the command module to call the touch command to set the time of the backup_file to the original time. (The Ansible stat module does not adjust file timestamps.)
|
|
How do I take a backup of a file without changing its timestamp with an Ansible playbook? I tried backup=yes, but the problem is that it changes the timestamp of the file.
Code:- dest={{item}} state=absent regexp='TLSv1' backup=yes with_items: ('{{certs_dir.stdout_lines}}')
|
How to take backup of file without changing its time-stamp with Ansible playbook
|
C:\Program Files\PostgreSQL\9.3\bin> pg_dump -h "jumbo.db.elephantsql.com" -U "hytxlzju" -p "5432" --verbose --role "hytxlzju" --format p --encoding "SQL_ASCII" "hytxlzju" > "C:\Program Files\PostgreSQL\9.3\bin\extremeBlueDB.sql"
|
I am using the pg_dump command in the following way, on my Windows machine:
pg_dump -h "jumbo.db.elephantsql.com" -U "hytxlzju" -p "5432" -f "ebDumping.sql" --verbose "hytxlzju" > "C:\Program Files\PostgreSQL\9.3\bin\extremeBlueDB.sql"
I don't get any logs and I cannot see the file being created at the specified location. Any idea?
|
pg_dump halting on WIndows
|
My solution is:
yum -y install duplicity rsync gpg python python-devel python-pip
pip install --upgrade pip==9.0.3
pip install duplicity
If you have an error like Unable to get SCM version: No module named setuptools_scm, do this:
pip install -U pip setuptools
pip install -U pip setuptools_scm
yum install librsync-devel
pip install duplicity
|
|
I have installed Duplicity on some AWS EC2 instances using the following command:
yum -y install duplicity rsync gpg python python-devel python-pip --enablerepo=epel
This was based on an approach described here
https://rtcamp.com/tutorials/backups/duplicity-amazon-s3/
However, whenever I try to run a duplicity command, I get the following error:
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 42, in <module>
    from duplicity import log
ImportError: No module named duplicity
Anyone have any ideas on how to solve this?
|
Error when running Duplicity on AWS Linux
|
Consider using --allow-non-empty if the destination repository's revisions are known to mirror their respective revisions in the source repository.
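Applied to the command from the question (check svnsync help init to confirm your Subversion version supports the flag):
svnsync init --allow-non-empty file:///var/www/svn/project_z/ http://svn.mysvn.com/svn/project_z/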
|
|
I have a Subversion repository with content in it and I want to svnsync it to a remote server. The issue is that I am getting an error message saying svnsync: Cannot initialize a repository with content in it when I try to initialize svnsync.
svnsync init file:///var/www/svn/project_z/ http://svn.mysvn.com/svn/project_z/
How do I svnsync a repository that already has content in it?
|
svnsync: Cannot initialize a repository with content in it
|
The solution by David Walsh looks like what you want:
http://davidwalsh.name/backup-mysql-database-php
A PHP script that retrieves the tables in a database and saves the data in a .sql file.
|
I want to create a cronjob for making a backup (sql dump) from my database and e-mail it to me. Setting up the cronjob and stuff works great and I'm able to use parts of my zend application :)
Unfortunately I cannot use exec() or system() on my server so now I'm looking for a way to get the same result. I searched everywhere with all possible descriptions I could think of, but without any results.
So in short:
I want to back up my database
Preferably in .sql format (like export in phpmyadmin)
Using the Zend framework (so I can use my already loaded
application.ini settings for the database)
I cannot use exec() or system()
I'm completely stuck so really anything would help! Thanks in advance!
|
Zend export database for backup
|
Partially Succeeded means that there were likely some files which could not be backed up, because they were locked for some reason. When this happens the backup process skips them and backs up the rest of the site and database if configured. You should be able to see which files were skipped in the log file. If for some reason you do not need these files backed up you can skip them by following the instructions in section “Backup just part of your app” here.
A locked on-demand/triggered Azure WebJob is sometimes the reason for a Partially Succeeded backup status.
Editing the opening question. It is still not clear why the in-app database was not backed up even though the website was manually stopped and is not servicing requests. I need the complete backup!
– Snowy
|
|
App Service on Standard plan, using MySQL in-app database. App is stopped, and a manual backup always completes as "partial". The configuration for the backup blade shows no database exists. I am concerned that the database in the filesystem is not being included, so the restore will fail.
How can I be confident in Azure App Service Backup?
Thanks.
Added Information: Backup Log
CorrelationId: 19a70ee5-7158-49e9-8f58-35e39f231a34
Creating temp folder.
Retrieve site meta-data.
Backing up the databases.
Failed to backup in-app database. Skipping in-app database backup.
Backing up site content and uploading to the blob...
Uploading metadata to the blob.
|
Azure App Service Backup Partial for Wordpress?
|
Turns out the problem was that to use RegLoadKeyW(), the loaded hive needs to be somewhere writable. Since the shadow copy is read-only, it failed.
When I copied the hive file outside the shadow copy, it worked fine.
|
I'm creating a shadow copy and I want to mount a registry hive from that shadow copy using RegLoadKey() so I go over its content using the normal registry functions.
This usually works well except in certain machines where it doesn't work at all.
I create the shadow copy and get its mount point - something like
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy8
I then call
RegLoadKeyW(HKEY_LOCAL_MACHINE, "\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy8\Windows\System32\config\SOFTWARE", "mntpoint");
This call returns 1009 - The configuration registry database is corrupt.
If I use CreateFileW() I can open this file successfully using that path so it is definitely there.
I've made sure that the shadow copy is created with the registry writer so I don't think that's the issue.
There's no difference if I create the shadow copy and try this after a reboot.
This only happens on some machines. on most it works just fine. I'm not sure what differentiates the machines it doesn't work on.
The machine is a windows 2008 64-bit.
|
RegLoadKey to a hive file from within a Shadow copy [closed]
|
The first thing to understand is why that setEqual method can't work: you need to know how identifiers work. (Reading that link should be very helpful.) For a quick rundown with probably too much terminology: in your function, the parameter restore is bound to an object, and you are merely re-binding that identifier with the = operator. Here are some examples of binding the identifier restore to things.
# Bind the identifier `restore` to the number object 1.
restore = 1
# Bind the identifier `restore` to the string object 'Some string.'
# The original object that `restore` was bound to is unaffected.
restore = 'Some string.'
So, in your function, when you say:
restore = []
You are actually binding restore to a new list object you're creating. Because Python has function-local scoping, restore in your example is binding the function-local identifier restore to the new list. This will not change anything you're passing in to setEqual as restore. For example,
test_variable = 1
setEqual(test_variable, [1, 2, 3, 4])
# Passes, because the identifier test_variable
# CAN'T be rebound within this scope from setEqual.
assert test_variable == 1
Simplifying a bit, you can only bind identifiers in the currently executing scope -- you can never write a function like setEqual that rebinds an identifier in the scope outside of that function. As @Ignacio says, you can use something like a copy function to rebind the identifier in the current scope:
restore = list(backup)
|
Specifically, I want to create a backup of a list, then make some changes to that list, append all the changes to a third list, but then reset the first list with the backup before making further changes, etc, until I'm finished making changes and want to copy back all the content in the third list to the first one. Unfortunately, it seems that whenever I make changes to the first list in another function, the backup gets changed also. Using original = backup didn't work too well; nor did using
def setEqual(restore, backup):
restore = []
for number in backup:
restore.append(number)
solve my problem; even though I successfully restored the list from the backup, the backup nevertheless changed whenever I changed the original list.
How would I go about solving this problem?
|
How do I copy only the values and not the references from a Python list?
|
Docker images save "diffs" with every commit you make, along with all previous versions of your image. This means that your process is guaranteed to steadily increase your final image size. To avoid this, instead of using docker save, you should use docker export (which takes the container as an argument, instead of an image).
Docker save
Docker export
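A sketch of the nightly routine using export instead of save (the container name db_container and the paths are assumptions; adjust to your setup):
docker stop db_container
docker export db_container | gzip > /backups/debian-$(date +%Y%m%d).tar.gz
docker start db_container
# To recreate an image from such a backup later:
# zcat /backups/debian-20240101.tar.gz | docker import - my-debian-restored
Note that docker export flattens the container filesystem into a single tarball, so the layer history is not carried along, which is exactly why it stops growing.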
|
right now I'm experimenting with Docker.
I have "installed" a Docker Debian Image.
In this Debian Environment, I installed a application.
Now, every night I do this:
docker stop (stop the container)
docker commit (turn the container into an image)
docker save (save the image to a file; I do this to back up the current work)
docker run (start the container)
So as you can see, I am trying to create a "backup solution".
It works perfectly, but the problem is that the backup files are getting bigger and bigger every day, even without touching the container for multiple days.
It used to be well under 10 gigabytes, but now it's already 20.
Why is that? What am I doing wrong?
Thanks a lot
|
Docker image is getting bigger and bigger
|
You can write a simple script (batch file) that only zips the types of files you want. Suppose you want to zip only your .aspx, .cs, .config, and .xsd files,
you have WinRAR installed in c:\program files\winrar,
and your project is in c:\project\MyBigProject.
Then just open Notepad, copy-paste this, and save it as "script.bat" (don't forget the double quotes while saving; you need them so that Notepad saves it with a .bat extension instead of .txt).
So to back up (in other words, to zip the wanted files):
"C:\Program Files\WinRAR\rar" a -r0 -ed MyBackup.rar c:\project\MyBigProject\*.aspx,*.cs,*.xsd,*.config
The syntax is like this:
"path to winrar folder" 'switches' "Name of completed rar file" 'folder to zip'
Just make sure the paths are in the proper order. The "a -r0 -ed" parts are switches, and you can find out all about the switches here:
http://acritum.com/winrar/manual/index.html?html_helpswitches.htm
I use the "Ignore files" switch (http://acritum.com/winrar/manual/index.html?html_helpswxa.htm) to ignore all files and folders that I don't want (with wildcards). My project is about 86 MB; when it gets compressed with just the code files, it comes to 6 MB.
It's the best way to do it, really. If you need more help, please ask!
Edit: Also, look into SVN (it's free!) - I use SVN too, and it's really helpful. There is even a free tool called AnkhSVN to integrate it into Visual Studio. It's just fantastic!
|
I would like to make a backup of my project, but the folder now exceeds 6.6 GB mainly through extra libraries like boost etc, but even if I just select the folders with my sources, I end up with big files like: .ipch, .sdf and potentially others. To make matters worse I use eclipse to code and VS to compile, so that adds to the mess, although I have the impression that only VS creates big files.
In case shit hits the fan I would like to be able to unpack one archive, and have everything in there like the project settings and solution files, and the sources so that I can easily open it again in VS. I can live with having to re-download boost or other third party libs.
How do you tackle this problem and do I need to preserve file like .sdf?
Answer:
Thanks for all the tips. I will now adopt the solution proposed by LocustHorde because that seems to fit my needs best. I simply want one file that I can take offsite as a safe backup (and I don't want to use an online service). Storing all versions of all files doesn't seem to work towards smaller and simpler and it would be a bit overkill in this case, although I will look to install some version control system because I have no experience with them and I would like to get some...
Final Answer
After having a good look I found that dishing out which files had to be ignored by the winrar archiving was still a hassle. I finally ended up installing Git out of curiosity and liked it. So I now have some of my projects in a local repository. From eclipse I can easily mark files and directories for being ignored, and to make a backup I use git-extensions to clone the repo. I still need to look at purging old versions, which isn't very userfriendly in Git, but seems to be possible at least and then i will just 7zip the folder up. In the worst case I just delete the git database and I just have the last version of my source files. Or maybe I can checkout to another directory. We'll see.
|
Is there a sensible way to backup Visual Studio C++ solutions without all the "extra" files the IDE creates?
|
The safest option would be to revert your development environment to SQL 2005, as that way no matter what your code will be compatible with your hosting environment. You should be able to install a separate 2005 instance on your box, which should save time (have only one of 2005/2008 active at a time for performance reasons). To get this configured, you might have to uninstall 2008, then install a 2005 instance, then install a 2008 instance.
With regards to data, you might want to look into the BCP utility for copying data in and out of the database. Once you get the hang of it, it is pretty quick and convenient.
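A rough sketch of the BCP round trip (the server, database, table, and file names are placeholders; -T uses Windows authentication, -n keeps SQL Server's native format):
bcp MyDb.dbo.MyTable out MyTable.dat -S devserver -T -n
bcp MyDb.dbo.MyTable in MyTable.dat -S prodserver -U username -P password -n
You would run the out command per table on your 2008 box and the in command against the 2005 target after the schema has been created there.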
|
I only have SQL Server 2008 (Dev Edition) on my development machine
I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database)
I'm just wondering what the best approach is for:
Getting the initial DB structure & data into production.
And keeping any structural changes/data changes in sync in future.
As far as I can see...
Replication - not an option cos I can't connect to the production DB.
Restoring a backup - not an option because as far as I can see, you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set in 2005 compatibility mode) and it wouldn't make sense to be restoring production over the top of my dev version anyway.
Dump all the scripts from my 2008 database, revert my dev machine from 2008 -> 2005, and recreate the database from the scripts, then just use backup & restore to get the initial DB into production, then run scripts through the web panel from that point onwards
Dump all the scripts from my 2008 Database and generate the entire 2005 db from scripts in production. then run scripts through the web panel from that point onwards
With the last 2 options, I'd probably need to script all the data inserts as well using some tool (which I presume exists on the web)
Are there any other possibile solutions that I'm not considering.
|
What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005
|
On this theme, the answers above have pointed me towards the following plugin:
http://www.genealogy-computer-tips.com/wp-database-backup/
This seems to be everything I needed! Cheers,
|
Does anyone know of a good way to automatically back up databases used for WordPress blogs? Preferably a way of getting the backup emailed as a .zip file to the admin user so it can be stored remotely.
|
Wordpress database backup [closed]
|
It's not possible. Your app isn't even capable of reading the documents from other apps. This is accomplished via sandboxing. Every read/write your application tries to do to the filesystem is checked by the kernel to ensure you're staying within your sandbox. The documents belonging to other apps are outside of your sandbox, so you cannot see them.
|
I am currently coding a backup app for iOS, and I want to have options to let the user back up things like Application Data (other app's documents, etc,) Contacts, Safari Bookmarks, and all that fun stuff.
I'd like to know if that's possible, how I'd do it, and where those files are stored, and most importantly, if this is actually allowed by Apple. I read through their docs, and I haven't seen anything that speaks against it.
|
Reading Files belonging to other Apps iOS
|
If what you're wanting is a backup of the current state of the files themselves (and don't actually want the full version history), use svn export instead.
If you are trying to back up the history, then I concur with Rohith's answer.
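For the export route, a minimal example (the repository URL and target path are placeholders):
svn export http://svn.example.com/repo/trunk/Test /backups/Test-snapshot
This writes a clean snapshot of that folder's current state, with no .svn metadata and no history.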
|
|
I need to take a backup of one folder of my SVN repository. For this I have tried the svndump and svndumpfilter commands, but to no avail.
Can any one please explain how to do this with an example.
Update:
I have a repository in which I have one folder, say "Test". Apart from "Test" there are some more folders/projects in my repository. If I take a full backup of my repository it consumes a lot of space (30 GB), so I want to move only the "Test" folder, with its history, to another repository, so that I can take regular backups of only the "Test" folder (the new repository), as it will take less space. (I don't need to take regular backups of the other folders, only "Test".)
How can I do this?
|
To take the backup of required files of SVN repository
|
Is this an exercise you're doing? If not, you should probably look at some of the production message queueing technologies (e.g. MSMQ for Windows), which support persisting the queues on disk rather than just storing them in memory.
In terms of your requirements:
1. Has a backup on hard disk in real time
Yes, MSMQ can do that.
2. Can restore the backup
And that.
3. Can respond to massive enqueue/dequeue requests
And this...
|
Please give me some hints on my issue.
I'm building a queue data structure that:
has a backup on hard disk at realtime
and can restore the backup
Can respond to massive enqueue/dequeue request
Thank you!
|
Building a high performance and automatically backupped queue
|
Go to your Time Machine drive in Finder and check the permissions of Backups.backupdb. If you see a red stop sign it means that the permissions are wrong.
Open Terminal and check out the permissions
$ sudo -s
$ cd /Volumes/Time\ Machine
$ ls -l
drwxrwx---@ 6 root wheel 204B Jul 9 16:26 Backups.backupdb
Not sure why the group is set to wheel (perhaps previous macOS versions had administrators in the wheel group as well?), but changing it to admin (assuming your user has Administrator privileges) fixes the issue:
$ chgrp admin Backups.backupdb
This seems to have fixed it for me. No need to change the rwx permissions. I am unsure if there is a Repair Permissions feature for Time Machine drives, as this has been removed from Disk Utility.
|
Here is a working solution if you get a pop up warning like Can't connect to a current Time Machine backup disk. when browsing Time Machine and Verify/Repair Disk turns out OK in Disk Utility.
See my answer below.
|
Can't connect to a current Time Machine backup disk [closed]
|
When you run the command
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
you're pulling a docker image and spawning a docker container.
Please read more about Docker and containerization.
In order to run cbbackup you need to log into your docker container.
Follow these steps:
Retrieve the container-id:
$ docker ps -a
Look for the CONTAINER ID for IMAGE NAME=couchbase
Login to the container using the command:
$ docker exec -it <container-id> bash
Go to the directory : /opt/couchbase/bin using:
$ cd /opt/couchbase/bin
You'll find cbbackup binary in this directory.
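Putting those steps together, a cbbackup run from inside the container might look like this (the credentials and backup path are placeholders; check cbbackup -h for the options you actually need):
docker exec -it <container-id> bash
cd /opt/couchbase/bin
./cbbackup http://localhost:8091 /opt/couchbase/var/backup -u Administrator -p password
Writing the backup under /opt/couchbase/var keeps it inside the volume mapped with -v in the question's docker run command, so it is also visible on the host.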
|
I've used docker to install couchbase on my ubuntu machine using (https://hub.docker.com/r/couchbase/server/). The docker run query is as follows:
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
Everything works perfectly fine. My application connects, I'm able to insert/update and query the couchbase. Now, I'm looking to debug a situation wherein the couchbase is on my co-developers machine who also has the same installation i.e., couchbase on docker using the above link. For achieving this, I wanted to run cbbackup on his installation. To achieve this, I run the following command which is a variation of the above link:
bash -c "clear && docker exec -it couch-db sh"
Can anyone please help me with the location of /opt/couchbase/bin in this setup? I believe this is where I can get access to "cbbackup", "cbrestore" and "cbtransfer" which I can then use to backup and restore data from my colleague's machine.
Thanks,
Abhi.
|
Docker couchbase cbbackup/cbtransfer/cbrestore tools
|
Have you entered a retention period?
When not using the incremental backup you have to specify the retention period in order for it to "stick".
Just to be on the safe side: you can't use "0", as this is reserved for incremental.
|
We have set up a test artifactory server.
We tried to edit the pre-defined backup-daily plan by unchecking the Incremental checkbox and clicking Save.
However, when going back to the edit screen, the check box remains checked.
Is there a reason for that?
This also happens when defining a new (custom) back up plan.
The Incremental check box seems to always remain checked.
Here is the system info.
Here are the logs when going through the process of unchecking Incremental checkbox and clicking Save.
2017-04-07 09:50:35,126 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:394) - Reloading configuration...
2017-04-07 09:50:35,127 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:250) - Saving new configuration in storage...
2017-04-07 09:50:35,154 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:254) - New configuration saved.
2017-04-07 09:50:35,156 [http-nio-8081-exec-9] [INFO ] (o.a.s.ArtifactoryApplicationContext:433) - Artifactory application context set to NOT READY by reload
2017-04-07 09:50:36,561 [http-nio-8081-exec-9] [INFO ] (o.a.s.BaseTaskServiceDescriptorHandler:51) - No Replication configured. Replication is disabled.
2017-04-07 09:50:36,584 [http-nio-8081-exec-9] [INFO ] (o.a.s.ArtifactoryApplicationContext:433) - Artifactory application context set to READY by reload
2017-04-07 09:50:36,586 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:406) - Configuration reloaded.
|
Artifactory 5.2.0: managing non-incremental backups
|
If the crontab syntax given above is the one you actually use, your cronjob isn't executed three times a day, but rather every minute during the 12am, 8am, and 4pm hours, resulting in 60 executions in each of those hours. Depending on the script, it may therefore seem to run for an hour.
If the script runs for 2-3 minutes, you should see 2-3 tasks in parallel if you use the ps command during this period of time. You might also want to investigate syslog; there should be a log entry for each command crond starts every minute.
Add the minute of the hour when your cronjob should start to solve this issue (e.g. 12 if you want your cronjob to run at the 12th minute past 12am, 8am, and 4pm every day):
12 0,8,16 * * * /opt/maintenance/backup-databases.sh
|
I have a small bash script to back up some MySQL databases. The script dumps the databases from MySQL using mysqldump and then rsyncs the zipped dumps to another server on the LAN.
When I run the script directly from the command line, the execution time is roughly 2-3 minutes.
I added the same bash script to the crontab of the root user. This cronjob is executed three times a day. I have the impression that the execution of the script then takes much, much longer (up to an hour, I guess).
Is there any way I can debug what's going on behind the scenes? I want to find out why the execution takes so much more time.
The crontab-entry looks as follows:
* 0,8,16 * * * /opt/maintenance/backup-databases.sh
|
Script seems to take more time when run as cronjob
|
Why must you use the date embedded within the file name? The last modified date should be the same as the date embedded in the file name as long as the backup has not been modified since it was created.
FORFILES is one of the few Windows utilities that conveniently works with date arithmetic. Type FORFILES /? from the command line to get help on its usage.
forfiles /p "D:\Google Drive\Saves Backup" /m "*.rar" /d -7 /c "cmd /c del @path"
If you have a risk that someone could modify a backup, thus changing the last modified date, then the above will not work. Parsing and comparing dates is a pain in batch. You would be better off using VBScript.
|
So I've got a batch file that backs up a folder to my Google Drive directory, like so:
C:\Program Files\WinRAR\rar.exe a -r "D:\Google Drive\Saves Backup\%DATE%.rar" "D:\Documents\My Games\"
This makes a file called 30-Sep-12.rar (being run today) in the appropriate folder.
However, my question is this: Is there some way to go through said folder (D:\Google Drive\Saves Backup) and delete backups that are more than a week old, as determined by the filename?
|
Batch - Remove backups a week old
|
In Firebird < 2.0, -r will replace your current database file with the one restored from the backup. In FB >= 2.0, you need to specify -rep for that. Take care to avoid replacing an active database.
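For example (the file names and credentials are placeholders):
gbak -c backup.fbk /data/newdb.fdb -user SYSDBA -password masterkey
gbak -rep backup.fbk /data/mydb.fdb -user SYSDBA -password masterkey
The first form creates a new database from the backup; the second replaces the existing database file, which is destructive if pointed at the wrong file.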
|
When I run:
gbak -r
what will it do?
|
What does the switch -r mean for Firebird gbak tool?
|
You are using the file name, not the path for RESTORE. Try something like the following - only specify the path:
db2 restore database gyczpas from "/home/db2inst1/GYCZPAS/PAS_BACKUP" taken at 20170109092932 into gyczpas
|
I am trying to restore a DB2 database, but it says the path is not valid.
This is what I tried:
db2 restore database gyczpas from "/home/db2inst1/GYCZPAS/PAS_BACKUP/GYCZPAS.0.db2inst1.NODE0000.CATN0000.20170109092932.001" taken at 20170109092932 into gyczpas
SQL2036N The path for the file or device "/home/db2inst1/GYCZPAS/PAS_BACKUP/GYCZPAS.0.db2inst1.NODE0000.CATN000" is not valid.
I used the same path during RESTORE that I used for the BACKUP command, but it fails. What could be the reason?
DB2 version: v9.7
|
DB2: restore database returns error SQL2036N on Linux
|
You can also exclude /var/log.
If you have some database on the system, it's ok that the size of the backup grows.
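For instance, extending the tar invocation from the script below with a few more excludes (which ones you need depends on your system; /proc, /run, and /var/log are typical additional candidates):
tar -cpzf $DESDIR/$FILENAME --exclude=/home/backup --exclude=/tmp --exclude=/sys --exclude=/dev --exclude=/proc --exclude=/run --exclude=/var/log $SRCDIR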
|
I've got a backup script that's supposed to back up the whole system, but the backup keeps increasing in size day after day. I've set it to do one backup each day in crontab.
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
SRCDIR=/
DESDIR=/home/backup
tar -cpzf $DESDIR/$FILENAME --exclude=/home/backup --exclude=/tmp --exclude=/sys --exclude=/dev $SRCDIR
Is there some other directory that is constantly changing that i need to exclude aswell?
Thanks in advance.
|
Tar backup keeps increasing in size
|
You could try using the Try and Catch method by wrapping the (create-7zip $_.FullName $dest) with a try and then catch any errors:
Try{ (create-7zip $_.FullName $dest) }
Catch{ Write-Host $error[0] }
This will try the create-7zip function and write any errors that occur to the shell.
|
I am still very new and I have for example one script to backup some folders by zipping and copying them to a newly created folder.
Now I want to know if the zip and copy process was successful, by successful i mean if my computer zipped and copied it. I don't want to check the content, so I assume that my script took the right folders and zipped them.
Here is my script :
$backupversion = "1.65"
# declare variables for zip
$folder = "C:\com\services" , "C:\com\www"
$destPath = "C:\com\backup\$backupversion\"
# Create Folder for the zipped services
New-Item -ItemType directory -Path "$destPath"
#Define zip function
function create-7zip{
param([String] $folder,
[String] $destinationFilePath)
write-host $folder $destinationFilePath
[string]$pathToZipExe = "C:\Program Files (x86)\7-Zip\7zG.exe";
[Array]$arguments = "a", "-tzip", "$destinationFilePath", "$folder";
& $pathToZipExe $arguments;
}
Get-ChildItem $folder | ? { $_.PSIsContainer} | % {
write-host $_.BaseName $_.Name;
$dest= [System.String]::Concat($destPath,$_.Name,".zip");
(create-7zip $_.FullName $dest)
}
Now I can either check whether a newly created folder exists in the parent folder (by time), or check whether the zip files exist in the folders I created.
Which way would you suggest? I probably only know these ways, but there are a million ways to do this. What's your idea? The only rule is that PowerShell should be used.
thanks in advance
|
How many ways to check if a script was "successful" by using Powershell?
|
It depends a lot more on your operational requirements than anything else.
All three will require shelling out to an external program. libpq doesn't provide those facilities directly; you'll need to invoke the pg_basebackup or pg_dump via execv or similar.
All three have different advantages.
Atomic snapshot based backups are useful if the filesystem supports them, but become useless if you're using tablespaces since you then need a multivolume atomic snapshot - something most systems don't support. They can also be a pain to set up.
pg_dump is simple and produces compact backups, but requires more server resources to run and doesn't support any kind of point-in-time recovery or incremental backup.
pg_basebackup + WAL archiving and PITR is very useful, and has a fairly low resource cost on the server, but is more complex to set up and manage. Proper backup testing is imperative.
I would strongly recommend allowing the user to control the backup method(s) used. Start with pg_dump since you can just invoke it as a simple command line and manage a single file. Use the -Fc mode and pg_restore to restore it where needed. Then explore things like configuring the server for WAL archiving and PITR once you've got the basics going.
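As a rough sketch (user, database and file names below are placeholders), a custom-format dump and restore look like this:
pg_dump -Fc -U postgres -d mydb -f /backups/mydb.dump
pg_restore -U postgres -d mydb_restored /backups/mydb.dump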
|
I'm a new to PostgreSQL and I'm looking to backup the database. I understand that there are 3 methods pg_dump, snapshot and copy and using WAL. Which one do you suggest for full backup of the database? If possible, provide code snippets.
|
Backing up PostgreSQL
|
It seems you want a bash script that backs up databases dynamically as they are created in MySQL. You can put the MySQL root user account information in my.cnf in the root directory, or within the bash script under # [ Define Variables ].
You will need to make the bash script executable with
$sudo chmod +x backupmysql.sh
This will allow you to run the script with the following command.
$sudo ./backupmysql.sh
You can name the script whatever you like. In this example, I named it backupmysql.sh.
Here is the bash script:
#!/bin/bash
# [ Define Variables ]
HOST=`hostname -s`
syslogtag=MySQL-Backup
DEST=/var/dba/backup/
DBS="$(mysql -u root -Bse 'show databases' | egrep -v '^Database$|hold$' | grep -v 'performance_schema\|information_schema')"
DATE=$(date +'%F')
#[ Individually dump all databases with date stamps ]
for db in ${DBS[@]};
do
GZ_FILENAME=$HOST-$db-$DATE.sql.gz
mysqldump -u root --quote-names --opt --single-transaction --quick $db > $DEST$HOST-$db-$DATE.sql
ERR=$?
if [ $ERR != 0 ]; then
NOTIFY_MESSAGE="Error: $ERR, while backing up database: $db"
logger -i -t ${syslogtag} "MySQL Database Backup FAILED; Database: $db"
else
NOTIFY_MESSAGE="Successfully backed up database: $db "
logger -i -t ${syslogtag} "MySQL Database Backup Successful; Database: $db"
fi
echo $NOTIFY_MESSAGE
done
If you have large files for backup, you can replace the statement in the bash script for the mysqldump to compress the file using gzip.
mysqldump -u root --quote-names --opt --single-transaction --quick $db | gzip -cf > $DEST$HOST-$db-$DATE.sql.gz
you can use gunzip to uncompress the file.
|
REF: http://www.rsnapshot.org/howto/1.2/rsnapshot-HOWTO.en.html (section 4.3.9, backup_script)
I need to backup ALL the mysql databases by dynamically should new ones be created. Is there an ideal way to do this in bash with minimal code?
Would I need to log in to mysql and get all the databases?
|
MySQL Backup Script in Bash
|
If I'm understanding you correctly, you want to make a backup of your project? If you've never used version control before, now would be a great time to start! Version control will not only provide you with what you're looking for but many other great features. There are plenty of different SCMs available for you to choose from: Git, SVN, Mercurial and so on.
Otherwise if all you want is to copy the project to another location, open your eclipse workspace folder (the directory you defined when you first started eclipse) and copy the project directory from there. Or do as MarchingHome suggests.
|
Hey guys simple question here. Whats the best way to back up an android project? I use eclipse. I'm fairly new and not sure what I need to back it up. Do I need just the project or do I need the meta data also? Thanks guys
|
Best way to Backup project
|
BACKUP DATABASE @strDB TO DISK =@BackupFile WITH RETAINDAYS = 10, NAME = N'MyDataBase_DATA-Full Database Backup', STATS = 10
You must define @BackupFile (the full path of the backup file) and @strDB (the database name).
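For example, on SQL Server 2008 or later (the database name and file path below are placeholders):
DECLARE @strDB sysname = N'MyDataBase';
DECLARE @BackupFile nvarchar(260) = N'D:\Backups\MyDataBase_Full.bak';
BACKUP DATABASE @strDB TO DISK = @BackupFile
    WITH RETAINDAYS = 10, NAME = N'MyDataBase_DATA-Full Database Backup', STATS = 10;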
All of this is free in Books Online which you can find online.
|
I am looking for a way to backup a SQL Server database with T-SQL. I do not have root access to this server through the console, as my only access comes through SQL Server Management Studio.
Could someone please show me the SQL that I could use to export the raw SQL for my entire database?
|
SQL Server : backup database with T-SQL?
|
Just rsync the clone ( including working directory) to another location frequently.
|
I need a backup that included uncommitted modifications because my developers don't like to commit often.
|
What do you all recommend for backing up git workspaces/clones?
|
On a *nix system, use a CRON job. On Windows, it's Scheduled Task... Either will execute a script at a given time on any (or a specific) day.
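For instance, a crontab entry that runs a daily PHP update script at 1 AM might look like this (the script path is a placeholder):
0 1 * * * /usr/bin/php /path/to/update_stocks.php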
I'd recommend a better table design:
DROP TABLE IF EXISTS `example`.`stocks`;
CREATE TABLE `example`.`stocks` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(45) NOT NULL default '',
`stock_value` varchar(40) NOT NULL default '',
`created_date` timestamp NOT NULL default CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
This way, if you want to see the last 9 values for a given stock name, use:
SELECT s.name,
s.stock_value,
s.created_date
FROM STOCKS s
WHERE s.name = ?
ORDER BY s.created_date DESC
LIMIT 9
|
I have wondered for a while now how to do this. I want to take weekly backups of all or many of my tables that store values that change every day in MySQL. I also want to execute daily functions with PHP that update values in my db.
I was thinking of making a stock investing function. Where I have fictional data as the value for various stocks, a value that changes randomly every day for every stock.
Maybe something like this for the last 9 days of the stock price curve.
CREATE TABLE `stocks` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(40) NOT NULL default '',
`day_1` varchar(40) NOT NULL default '',
`day_2` varchar(40) NOT NULL default '',
`day_3` varchar(40) NOT NULL default '',
`day_4` varchar(40) NOT NULL default '',
`day_5` varchar(40) NOT NULL default '',
`day_6` varchar(40) NOT NULL default '',
`day_7` varchar(40) NOT NULL default '',
`day_8` varchar(40) NOT NULL default '',
`day_9` varchar(40) NOT NULL default '',
if I could execute a php function once a day that made an array of the last 9 days of values. Then just change the day_1 value and use array_push($array, "new_stock_price"); then update the db with the new last_9_days values.
|
how to set up daily php functions or mysql queries
|
You mysqldump command is like this :
/usr/bin/mysqldump --host=HOST --user=USER --password=PASSWORD TABLE --quick --lock-tables --add-drop-table
Looking at the manual of mysqldump, I would say that it will think that TABLE is actually the name of your database (quoting) :
There are three general ways to invoke
mysqldump:
shell> mysqldump [options] db_name [tables]
shell> mysqldump [options] --databases db_name1 [db_name2 db_name3...]
shell> mysqldump [options] --all-databases
Apparently, you have to put the name of the database before the name of the table you want to dump.
So, for instance, something like this might do the trick :
/usr/bin/mysqldump --host=HOST --user=USER --password=PASSWORD DBNAME TABLE --quick --lock-tables --add-drop-table
Hope this helps !
EDIT : I actually made a quick test, using "databasename.tablename", like you did :
mysqldump --user=USER --password=PASSWORD --no-data DBNAME.TABLENAME
I'm getting the same kind of output you have... But I also have an error :
mysqldump: Got error: 1102: Incorrect database name 'DBNAME.TABLENAME' when selecting the database
Which, I'm guessing, is going to the error output, and not the standard output (you are only redirecting that second one to your file)
If I'm using :
mysqldump --user=USER --password=PASSWORD --no-data DBNAME TABLENAME
Everything works OK : no error, and I have the dump.
|
I am trying to write a PHP program to automatically create backups of MySQL tables as .sql files on the server:
$backup = "$path/$tablename.sql";
$table = "mydbname.mytablename";
exec(
sprintf(
'/usr/bin/mysqldump --host=%s --user=%s --password=%s %s --quick --lock-tables --add-drop-table > %s',
$host,
$user,
$password,
$table,
$backup
)
);
All that I get in the resulting .sql file is this:
-- MySQL dump 10.10
--
-- Host: localhost Database: mydbname.mytablename
-- ------------------------------------------------------
-- Server version 5.0.27
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
It seems fishy that the results list Database:mydbname.mytablename.
I don't know where to start looking. My host, in helping my set up the program initially, said I would need to deactivate safe_mode (was already done) and give access to the binary (I don't know what this means).
Since the host takes two days to reply every time I ask a question, I'd like to resolve this myself if possible.
|
MySQLdump results in lots of commented lines, no real content
|
mysqldump is sufficient
It will generate the SQL code necessary to rebuild your database, and as the relationships are not special data (just logical links between tables) it's enough to back up the database. Even using mysqldump without the --opt param it will add index definitions, so the constraints will remain.
|
I would like to know how to backup my data from 2 separate tables (CATEGORIES and SUBCATEGORIES, where SUBCATEGORIES belong to a CATEGORY) in such a way that I can restore the relationship at a later time. I am not sure if mysqldump --opt db_name would suffice.
Example:
Categories:
| ID | name
-----------
| 1 | Audio
| 9 | Video
Subcategories:
| ID | category_id | name
-------------------------
| 1 | 1 | Guitar
| 2 | 1 | Piano
| 3 | 9 | Video Camera
Thanks
|
mysql backup preserving relationships
|
I had the same issue on my SQL Server 2012 instance; the error occurred during the DB backup using Ola's scripts. As mentioned above, the issue is with the output file: I changed the output file location in the SQL Agent job step and reran the job successfully.
|
I am using ola hallengren script for maintenance solution. When I run just the Database backup job for user database I get the following error. Unable to start execution of step 1 (reason: Variable SQLLOGDIR not found). The step failed.
I have checked the directory permissions and there is no issue there. The script creates the job with no problem. I get error message when I try to run the job.
|
variable for SQLLOGDIR not found
|
Usually, in the mysqldump backup script, the views are first created as placeholder tables, which are then dropped at the bottom of the script as each real view is created.
Sometimes there is an error in this process, because when a view is created there is a user set as DEFINER. That statement may fail because this user might not exist in the database.
Please verify that the view drop/create script exists at the end, note the error you are getting (if any), and run the import using the -v option for more logging.
|
I have a db in MariaDB 10.1.25 and in this, I have many tables and 20 views.
When I try to backup my db using mysqldump, it works fine for tables but in view definitions, it fails to create a create statement like it does with tables.
The code generated is this:
--
-- Temporary table structure for view `qry_clientes`
--
DROP TABLE IF EXISTS `qry_clientes`;
/*!50001 DROP VIEW IF EXISTS `qry_clientes`*/;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
/*!50001 CREATE TABLE `qry_clientes` (
`Id` tinyint NOT NULL,
`Cliente` tinyint NOT NULL,
`Direccion` tinyint NOT NULL,
`Ciudad` tinyint NOT NULL,
`Fono` tinyint NOT NULL,
`Fax` tinyint NOT NULL,
`Email` tinyint NOT NULL,
`Ruc` tinyint NOT NULL,
`tipo` tinyint NOT NULL
) ENGINE=MyISAM */;
SET character_set_client = @saved_cs_client;
and in this there are no view definitions. I have all the privileges granted.
|
View definitions in MariaDB are not create with mysqldump
|
It looks like you have a few options with Windows VMs, including something called the Windows Backup Agent. This agent allows you to configure redundancy and backup schedule, along with some other properties that may improve your performance.
Another helpful resource I found was this page which just talks about planning your back approach, along with next steps after you work through this material.
|
I have an Azure Windows VM running a few SQL server databases. Azure premium offers automated backup, which is great. I am however not able to change the frequency of the automated backups, which now occur every 2 hours. These 2 hour scheduled backups result in performance issues when the server is running important SQL Server jobs.
Is there any way to alter the frequency other than creating manual backup jobs with the job agent? Within the Azure portal I can only change things like the backup retention, not the frequency.
Thanks in advance!
|
Azure Automated Backup Frequency
|
You are using the wrong command; use mysqldump instead of mysql:
mysqldump -u "username" -p"password" "databasename" > "somename.sql"
or it
mysqldump -u "username" -p "databasename" > "somename.sql"
and type password when prompt.
|
I'm trying to take a backup of a MySQL database using the following command:
mysql -u "username" -p "databasename" > "somename.sql"
After I enter the password, it doesn't show any output/error. It doesn't show the terminal prompt. No backup file gets created.
I've used the same command successfully before. But I have no clue why it isn't working now.
Any ideas?
I work on Ubuntu 14.04 LTS.
|
MySQL backup command not working
|
I guess it is the pipe that is giving this problem. You might be getting the exit status of
"gzip -c > /usr/local/bin/database.gzip"
You might have to split the dump and gzip part into two.
use
`mysqldump -u root -ppassword database1 > ./dump.txt`;
if ($? == 0){
`gzip -9 ./dump.txt`;
}
else{
die "errored";
}
|
When I perform following in perl, I allways get the error code "0". Even if for example the mysql database does not exist, or the password is wrong, etc.
@args = ("mysqldump -u root -ppassword database1 | gzip -c > /usr/local/bin/database.gzip");
system(@args) == 0
or die "Command failed: @args \nError Code: $? \n";
My goal is to catch any error of the mysqldump command, so I can make sure if the backup was successfuly.
|
mysqldump in perl - return value is allways 0
|
One option is to import your repositories into a Mercurial hosting service like Bitbucket and then use the built-in push feature of Mercurial to periodically copy your local changes to the remote clone. You can find instructions on how to import an existing repository into Bitbucket here.
|
I am using Mercurial with tortoiseHg, and I have few repositories around my computer (because i work with different languages at the same time).
I would like to have a backup of all my repositories that I would trigger manually (ideally with an icon on my desktop).
I am open to many backup solution, like using dropbox, or using another computer on the same network or both solutions at the same time.
Which is the best solution and how can I implement it?
I am using windows 7
|
Manual backup for mercurial repository
|
You can create a CRON job which launches a shell code.
For instance if you want to inject a backup database:
mysql -u user -ppassword database < yourbackupfile.sql
Now if you want to create a dump:
mysqldump -u user -ppassword database > yourbackupfile.sql
|
I've been creating scheduled MySQL backups and then sending them via FTP to another test server using PHP. I need to know if there is a program that I can run to restore the files on my other server but using a scheduled restore?
Thanks
|
Is there a way of restoring a MySQL database on a time schedule
|
The error says that the backup file does not hold a backup of a database named vaio; it contains a backup of a different database.
There is not much else to it. Make sure you have the correct backup file.
|
I'm currently trying to restore my database.
The step I follow is executing the query
Restore Database vaio
from disk = 'C:\Users\DB101209123928_Diff_20120312.bak'
with replace;
But I'm getting the following error.
Msg 3154, Level 16, State 4, Line 1
The backup set holds a backup of a database other than the existing 'vaio' database.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
|
Can't restore SQL Server 2008 from backup
|
There are more open source solutions that I could name, but for this application my choice would be rsync and a cron job.
Here is a good overview of some open source options (some are more desktop oriented).
EDIT
The nice thing about rsync, is it can directly sync the folder storing the repo. The downside of this approach is it could sync a corrupt repository. Doing a dump and storing incremental backups would protect you from this.
Despite what I recommended above, my personal preference would be to avoid Subversion altogether. With a DVCS like git or mercurial, every developer has a full copy of the repository that can be used to restore the copy on your shared server.
|
CentOS 5.3
subversion 1.4.2
I forgot to add. Currently the total repository size is about 5GB, but that will grow over time.
We have our source code and documents on our internal server running CentOS 5.3 and we are using subversion 1.4.2.
We are looking for a backup strategy. We want to perform daily backups. We have about 30 repositories to backup of differing sizes.
I could create a script file and recursively backup using svnadmin dump.
However, I am looking for an automated backup system that will run nightly say 12am each day.
Does anyone know of any backup systems that are open-source? I think my company is reluctant to pay for any system.
Many thanks for any advice,
|
backing up subversion repositories
|
You can also browse to localhost/phpmyadmin, go to 'Export', and select the databases you want to export.
|
How can I backup my MySQL's databases? I'm using Windows Vista and MySQL 5.1.
I have found the folder "C:\Users\All Users\MySQL\MySQL Server 5.1\data" with all my database files and copy them, but how can I restore them if I need?
Thank you.
|
How to backup my MySQL's databases on Windows Vista?
|
4
You'll need this:
http://www.codeplex.com/ExpressMaint
Then you can create a .cmd file to run it and schedule it using Scheduled Tasks. I can't give you an exact command line because your setup will be different from mine, but the docs are here:
http://www.sqldbatips.com/showarticle.asp?ID=27
http://www.sqldbatips.com/showarticle.asp?ID=29
Share
Improve this answer
Follow
answered Oct 9, 2009 at 11:08
Mark BellMark Bell
29.3k2626 gold badges119119 silver badges147147 bronze badges
Add a comment
|
|
I need to backup SQL Server database.
Is it possible to do this automatically without human intervention at regular intervals? If so yes then please suggest me how to do it and I'm using SQL Server 2005 Express Edition.
|
Automatically backup SQL Server database
|
One way to do something like this would be to use the day of the week in the filename:
backup-mon.tgz
backup-tue.tgz
etc.
Then, when you backup, you would delete or overwrite the backup file for the current day of the week.
(Of course, this way you only get the latest 7 files, but it's a pretty simple method)
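A rough shell sketch of that naming scheme (the paths are placeholders):
DOW=$(date +%a)   # Mon, Tue, ...
tar czf /path/to/backups/backup-$DOW.tgz /path/to/webfiles /path/to/db_dump.sql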
|
I have created a shell script to backup my webfiles + database dump, put it into a tar archive and FTP it offsite. Id like to run it X times per week however I only want to keep the latest 10 backups on the FTP site.
How can I do this best? Should I be doing this work on the shell script side, or is there an FTP command to check last modified and admin things that way?
Any advice would be appreciated.
Thanks,
|
FTP - Only want to keep latest 10 files - delete LRU
|
The files ending with ~ are backup copies that Vim creates when you edit files (swap files use the .swp extension). You can try setting the backupdir and directory variables:
set backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
set directory=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
|
Problem 1: my Vim makes backups with the extension ~ to my root
I have the following line in my .vimrc
set backup backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//$
However, I cannot see a root directory in the line.
Why does my Vim make backups of my shell scripts with the extension ~ to my root?
Problem 2: my Zsh run my shell scripts at login which I have in my PATH. For instance, my "replaceUp" shell-script started at my root at login. I keep it at ~/bin/shells/apps by default.
Why does Zsh run shell scripts which are in my PATH at login?
|
Unable to stop the creation of backups to root by Vim/Emacs
|
For active shop
Configuration::updateValue('PS_SHOP_ENABLE', '1');
For maintenance mode
Configuration::updateValue('PS_SHOP_ENABLE', '0');
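A minimal sketch of a CLI script using that call, assuming it sits in the shop's root directory so it can load PrestaShop's bootstrap (the file name maintenance_on.php is hypothetical):
<?php
// maintenance_on.php - hypothetical helper; run from the shop root: php maintenance_on.php
require dirname(__FILE__).'/config/config.inc.php'; // PrestaShop bootstrap
Configuration::updateValue('PS_SHOP_ENABLE', '0');  // put the shop in maintenance mode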
|
I'm writing a script that would back-up my PrestaShop instance installed on my own server. I'm using Prestashop 1.7.7.4.
I suppose it is recommended to put your shop(s) in "Maintenance mode" during database dumping, to make sure nobody interacts with it. However, the only method of enabling the "Maintenance mode" I can find is using the administration panel, which of course requires manual intervention. It makes automatic backups impossible.
Is there any established way to enable "Maintenance mode" using a script/cli/api and not the administration panel?
|
Enabling "Maintenance mode" in PrestaShop 1.7 using a script and not admin panel?
|
The issue is that your backup database sample to disk='...' statement by default APPENDS the new backup to the backup device (backup file). As the result, you then have multiple backups stored in the backup file.
When doing restore, you are restoring the first backup.
To solve the problem, you can specify to override the content of the backup file by using the "WITH INIT" parameter:
BACKUP DATABASE sample to DISK='D:\Backup\sample.bak' WITH INIT;
More docs can be found eg. here
|
I'm trying to backup and restore a database in Sql Server 2014.
The initial backup restore works.
But when I make some changes to the database and repeat the backup/reload procedure
I get the data of the first backup losing the most recent changes.
Below is a script that illustrates the issue I'm facing
CREATE DATABASE sample;
CREATE TABLE list (
id INT,
name VARCHAR(50)
);
--first record is inserted
BACKUP DATABASE sample to DISK='D:\Backup\sample.bak';
truncate table list;
GO
USE master;
GO
ALTER DATABASE sample
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
RESTORE DATABASE sample FROM DISK='D:\Backup\sample.bak' with REPLACE;
GO
ALTER DATABASE sample
SET MULTI_USER;
GO
--restored database contains one record
use sample;
select * from list;
--second record is inserted
insert into list values(2,'item_2');
select * from list;
BACKUP DATABASE sample to DISK='D:\Backup\sample.bak';
GO
USE master;
GO
ALTER DATABASE sample
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
RESTORE DATABASE sample FROM DISK='D:\Backup\sample.bak' with REPLACE;
GO
ALTER DATABASE sample
SET MULTI_USER;
GO
--restored database STILL contains one record
use sample;
select * from list;
|
SQL Server The initial backup restore works, the rest don't
|
I could not find out why it's happening, but to work around it I created another cron job that deletes the old backup files:
0 8 * * tue,thu,sat find /var/opt/gitlab/backups/15* -mtime +1 -type f -delete
|
We are using a AWS ec2-instance for gitlab with omnibus installation, Recently gitlab has not been deleting the backup files and disk is filling up. I am not sure which log i should be seeing for this issue.
When i do sudo gitlab-rake gitlab:backup:create --trace
so there is no error message, what could be the reason for not deleting the old backups? Please point me in the right direction.
Deleting old backups ... done. (0 removed)
my backup configuration:
### Backup Settings
###! Docs: https://docs.gitlab.com/omnibus/settings/backups.html
gitlab_rails['manage_backup_path'] = true
gitlab_rails['backup_path'] = "/var/opt/gitlab/backups"
###! Docs: https://docs.gitlab.com/ce/raketasks/backup_restore.html#backup-archive-permissions
gitlab_rails['backup_archive_permissions'] = 0644
# gitlab_rails['backup_pg_schema'] = 'public'
###! The duration in seconds to keep backups before they are allowed to be deleted
gitlab_rails['backup_keep_time'] = 604800
Gitlab -version
gitlab-ce 10.2.2
|
Gitlab not deleting backups
|
You can use pg_dump to export the data just from the non-user tables in the development environment and pg_restore to bring that into prod.
The -t switch will let you pick specific tables.
pg_dump -d <database_name> -t <table_name>
https://www.postgresql.org/docs/current/static/app-pgdump.html
|
I have a site that uses PostgreSQL. All content that I provide in my site is created at a development environment (this happens because it's webcrawler content). The only information created at the production environment is information about the users.
I need to find a good way to update data stored at production. May I restore to production only the tables updated at development environment and PostgreSQL will update this records at production or the best way would be to backup the users information at production, insert them at development and restore the whole database at production?
Thank you
|
Best way to make PostgreSQL backups
|
For a start, to loop through all directories a fixed level deep, use this:
for dir in /home/customers/*/*/*/
A pattern ending in a slash / will only match directories.
Note that $dir is a lowercase variable name, don't use uppercase ones as they may clash with shell internal/environment variables.
Next, your conditions are a bit broken - you don't need to use a [[ test here:
if ! find "$dir" -maxdepth 1 -type d ! -mtime -36 | grep -q .
If anything is found, find will print it and grep will quietly match anything, so the pipeline will exit successfully. The ! at the start negates the condition, so the if branch will only be taken when this doesn't happen, i.e. when nothing is found. -name '*' is redundant.
You can do something similar with the second if, removing the [[ and ]] and using grep -q . to test for any output. I guess the cut -f 2- part is redundant too.
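Putting those suggestions together, a sketch of the corrected script (the thresholds, mail command and recipient are taken from the question as-is):
#!/bin/bash
for dir in /home/customers/*/*/*/; do
    # alert when no backup directory matching the age test is found
    if ! find "$dir" -maxdepth 1 -type d ! -mtime -36 | grep -q .; then
        mail -s "No back-ups found today at $dir! Please check the issue!" [email protected]
        exit 1
    fi
    # alert when any backup directory is 50 KB or smaller
    if find "$dir" -mindepth 1 -maxdepth 1 -type d -exec du -ks {} + | awk '$1 <= 50' | grep -q .; then
        mail -s "Backup directory size is too small for $dir, please check the issue!" [email protected]
        exit 1
    fi
done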
|
I'm writing a script to check if there actually is a directory that has content and a normal size, and to see if there is a directory older than 36 hours; if not, it should alert me.
However I'm having trouble using the directories as variable.
When I execute the script it returns: ./test.sh: line 5: 1: No such file or directory.
I tried ALLDIR=$(ls /home/customers/*/ as well but returned the same error.
What am I doing wrong? Below is the script.
Thanks a lot in advance!!
#!/bin/bash
ALLDIR=$(find * /home/customers/*/ -maxdepth 2 -mindepth 2)
for DIR in ${ALLDIR}
do
if [[ $(find "$DIR" -maxdepth 1 -type d -name '*' ! -mtime -36 | wc -l = <1 ) ]]; then
mail -s "No back-ups found today at $DIR! Please check the issue!" [email protected]
exit 1
fi
done
for DIR in ${ALLDIR}
do
if [[ $(find "$DIR" -mindepth 1 -maxdepth 1 -type d -exec du -ks {} + | awk '$1 <= 50' | cut -f 2- ) ]]; then
mail -s "Backup directory size is too small for $DIR, please check the issue!" [email protected]
exit 1
fi
done
|
Bash: Use directory as variable
|
Ingress cost is 0 for any amount of data, as mentioned in network pricing on this page. The PUT/POST operations though are charged at $0.10 per 10,000 operations as per this page. In short for ingress, data is free, operations are not.
|
Currently we are using Amazon AWS S3 as a backup solution for our servers. Amazon AWS clearly states: "All data transfer in - $0.000 per GB". With Google Cloud Storage I can not find a clear answer.
From what I can find the egress and interconnect data transfer is defined. Interconnect is of course not relevant here. Egress means for as far as my english and the dictionary goes meaning outgoing transfer.
So how is the incoming data measured or are only the number of POST / PUT operation's calculated?
Hope that anyone who has been using Google for a while can elaborate.
|
Does google charge for data transfered into Google Cloud Storage
|
Problem
The function preg_quote() escapes regex special characters, including ( and ), so your subject string doesn't contain
define('DB_NAME', 'thedbname');
Instead, it contains
define\('DB_NAME', 'thedbname'\);
and your regex fails.
Solution
Just remove the preg_quote() from your code, like this:
if (preg_match_all("/define\('DB_NAME', '(.*?)'\)/", $content, $result)) {
print_r($result);
} else { // note: I added braces; it's better to use them always.
print "nothing found\n";
}
This works correctly and outputs:
Array
(
[0] => Array
(
[0] => define('DB_NAME', 'thedbname')
)
[1] => Array
(
[0] => thedbname
)
)
Demo.
|
I want to create a script to backup all my Wordpress installations without having to mark all the directories.
I want to test for the existence of the file wp-config.php and create a database backup with the information in this file.
Here my script which always outputs "nothing found":
$content = file_get_contents($sub_dir . '/wp-config.php');
if (preg_match_all("/define\('DB_NAME', '(.*?)'\)/", preg_quote($content, '/'), $result)) {
print_r($result);
}
else
print "nothing found\n";
I want to do the same parsing with DB_USER, DB_PASSWORD, and DB_HOST.
Here's an example of the content in this config file:
../...
// ** Réglages MySQL - Votre hébergeur doit vous fournir ces informations. ** //
/** Nom de la base de données de WordPress. */
define('DB_NAME', 'thedbname');
/** Utilisateur de la base de données MySQL. */
define('DB_USER', 'thedbuser');
/** Mot de passe de la base de données MySQL. */
define('DB_PASSWORD', 'thedbpassword');
/** Adresse de l'hébergement MySQL. */
define('DB_HOST', 'thedbhost');
../...
|
Regex to extract information from wp-config file
|
The ~-suffixed file is a backup copy of the file, with the state it had before you started editing.
You can disable such automatic backing up.
You can also control various aspects of such backing up, including where backup files are to be stored -- see the Emacs manual, node Backup and its subnodes.
You can also remove all of the backup files in a directory at once, by visiting the directory using Dired: C-x d. After visiting it, use ~ to mark all backup files for deletion, then x to delete them. See the Emacs manual, node Flagging Many Files.
|
Sorry, I have a problem while learning Emacs in the terminal. When I edit a file (file.txt) and save it with C-x C-s, I get 2 files with similar names (file.txt and file.txt~). How do I handle the duplicate file after saving a file with Emacs on Mac OS X?
|
how to handle duplicate file(.txt and .txt~) after save file with emacs
|
I discovered that it is possible to rollback my DB instance to a certain point-in-time
That's the approximate net effect, but your description is not precisely correct.
What's possible with Point-in-time Recovery is that you can create a new instance, with the data as it existed on your current instance, at the specified point in time.
Your current instance is not modified by this operation, so you're not actually rolling anything back.
Point-in-time allows you to specify any time >= the time of the first retained backup, and <= the "latest restorable time," which is approximately 5 minutes ago.
The binlogs are not "merged" when you specify an arbitrary time -- that's not how binary logging and restoration work. The new instance is created with the latest snapshot that occurred prior to the specified time, and then the binary logs from that point in time up until the time you specified are applied, consecutively, to the instance, in order to roll it forward from the snapshot to the desired point in time. Binlog entries after the specified point in time are simply not executed.
The end result is a new instance that represents the data on your instance as it existed at the specified point in time.
If you then want to actually replace the old RDS instance with the new one in your stack, you change the DB instance identifier on old (to something different) and new (to match the prior value from old) and the DNS entry is automatically updated so that your application can find the new instance at the old hostname.
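If you script this, the AWS CLI call looks roughly like the following (the instance identifiers and timestamp are placeholders):
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --restore-time 2017-06-01T12:00:00Z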
Yes, 35 days is the longest retention period for automated backups.
You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
|
I discovered that it is possible to rollback my DB instance to a certain point-in-time, via binary logging.
I can rollback to 5 minutes ago, but how do I see the previous PiT? Is this 10 minutes ago? What would happen if I select 7 minutes ago, would the binary logs of the two closest PiT merge?
A secondary question, is 35 days the longest retention period for automated backups? The list does not go further when modifying my DB instance.
|
Available Point-In-Time recovery points for AWS RDS instances
|
Some services, like the experimental mysql service on Bluemix, cannot be accessed from outside of Bluemix. You can only access them from within a Bluemix Cloud Foundry application. If you are using this service, you will need to deploy a phpMyAdmin application and bind it to that database to perform management operations.
If you are using other mysql services like cleardb, see Jeff's answer
|
I am using IBM Bluemix MySQL database as a DB server, but I don't know how to take backup from it.
In the past I used to do that with CF tunneling option, but the new CF tool doesn't support CF tunnel.
|
How to take backup of MySQL database in Bluemix?
|
This may not be the most efficient way but it seems to work as a starting point.
Sub DeleteBackups()
Dim fso As Object
Dim fcount As Object
Dim collection As New collection
Dim obj As Variant
Dim i As Long
Set fso = CreateObject("Scripting.FileSystemObject")
'add each file to a collection
For Each fcount In fso.GetFolder(ThisWorkbook.Path & "\" & "excel_backups" & "\").Files
collection.Add fcount
Next fcount
'sort the collection descending using the CreatedDate
Set collection = SortCollectionDesc(collection)
'kill items from index 6 onwards
For i = 6 To collection.Count
Kill collection(i)
Next i
End Sub
Function SortCollectionDesc(collection As collection)
'Sort collection descending by datecreated using standard bubble sort
Dim coll As New collection
Set coll = collection
Dim i As Long, j As Long
Dim vTemp As Object
'Two loops to bubble sort
For i = 1 To coll.Count - 1
For j = i + 1 To coll.Count
If coll(i).datecreated < coll(j).datecreated Then
'store the lesser item
Set vTemp = coll(j)
'remove the lesser item
coll.Remove j
're-add the lesser item before the greater Item
coll.Add Item:=vTemp, before:=i
Set vTemp = Nothing
End If
Next j
Next i
Set SortCollectionDesc = coll
End Function
|
I have a macro in excel that runs before save and creates a backup of an excel table with the actual date in its name.
These backups started to take too much space, so I have inserted another macro that deletes backups older than 14 days. The problem is that sometimes we don't save new copies for 2 weeks or months, so I need a macro that will leave only the 5 newest backups and delete the rest.
The current macro used:
'======================================================================================
'delete old backup
Set fso = CreateObject("Scripting.FileSystemObject")
For Each fcount In fso.GetFolder(ThisWorkbook.Path & "\" & "excel_backups" & "\").Files
If DateDiff("d", fcount.DateCreated, Now()) > 14 Then
Kill fcount
End If
Next fcount
'======================================================================================
backups are saved in this format:
ThisWorkbook.Path & "\excel_backups" & "\backup_" & Format(Date, "yyyy.mm.dd") & ".h" & Hour(Now) & "_" & ActiveWorkbook.name
so a backup looks like this: backup_2014.12.18.h14_[filename].xlsm
My question is: can this be modified somehow to delete only the oldest ones, and leave the last 5 newest of them? I have no idea how to start writing that.
Thank you for your time.
|
Excel VBA - leave 5 newest backups and delete the rest
|
You can just specify the table names along with mysqldump, something like:
mysqldump -u uname -pPSSW dbname table1 table2 table3 tableN > backup.sql
|
I am using the following cron job for backup my database daily.
/usr/bin/mysqldump -u UNAME -p PSSW databasename > /home/mysite/stock/backup.sql
But I want to backup only some of the tables not whole database. Is it possible with cron job?
|
cron job for backup only selected tables not whole database in mysql
|
Use echo to store your command in the crontab file from the command line
$ echo "1 4 * * * /bin/sh /share/CACHEDEV1_DATA/your-backup-folder/backup.sh" >> /etc/config/crontab
This command will run backup.sh at 04:01, i.e. 1 minute past 4 AM.
To make the crontab persistent during reboot, you have to execute this command
$ crontab /etc/config/crontab
Please note that you cannot save the script in /etc/, or /bin/ or some other directory outside of your HDD directories. In other words, always save your script in /share/CACHEDEV1_DATA/your-backup-folder. If you don’t, the script will be deleted upon reboot.
Restart crontab
$ /etc/init.d/crond.sh restart
Set correct permissions
chmod +x /share/CACHEDEV1_DATA/your-backup-folder/backup.sh
Wait for the cron to run, and see if it works
For the full guide, please visit: https://www.en0ch.se/qnap-and-cron/
|
i have a problem, executing my script by crontab on a qnap nas.
it is very confusing, because other test scripts work AND executing this script manually works, too.
here is the script:
#!/bin/sh
[[ ! -d /mnt/backup-cr/daily.0 ]] && mount -t nfs -o nolock 192.168.178.2:/volume1/backup-cr /mnt/backup-cr
#1
[[ -d /mnt/backup-cr/daily.7 ]] && rm -rf /mnt/backup-cr/daily.7
#2
[[ -d /mnt/backup-cr/daily.6 ]] && mv /mnt/backup-cr/daily.6 /mnt/backup-cr/daily.7
[[ -d /mnt/backup-cr/daily.5 ]] && mv /mnt/backup-cr/daily.5 /mnt/backup-cr/daily.6
[[ -d /mnt/backup-cr/daily.4 ]] && mv /mnt/backup-cr/daily.4 /mnt/backup-cr/daily.5
[[ -d /mnt/backup-cr/daily.3 ]] && mv /mnt/backup-cr/daily.3 /mnt/backup-cr/daily.4
[[ -d /mnt/backup-cr/daily.2 ]] && mv /mnt/backup-cr/daily.2 /mnt/backup-cr/daily.3
[[ -d /mnt/backup-cr/daily.1 ]] && mv /mnt/backup-cr/daily.1 /mnt/backup-cr/daily.2
#3
[[ -d /mnt/backup-cr/daily.0 ]] && cp -al /mnt/backup-cr/daily.0 /mnt/backup-cr/daily.1
#4
bakdate=$(date +%Y%m%d%H%M)
/usr/bin/rsync -av \
--stats \
--delete \
--human-readable \
--log-file=/mnt/backup-cr/logs/rsync-cr.$bakdate.log \
/share/cr/ \
/mnt/backup-cr/daily.0 \
MAILFILE=rsync-cr.$bakdate.log.tmp
echo "Subject: rsync-log for cr from srv" > $MAILFILE
echo "To: [email protected]" >> $MAILFILE
echo "From: [email protected]" >> $MAILFILE
echo "" >> $MAILFILE
/usr/bin/tail -13 /mnt/backup-cr/logs/rsync-cr.$bakdate.log >> $MAILFILE
echo "" >> $MAILFILE
echo "" >> $MAILFILE
cat $MAILFILE | ssmtp [email protected]
rm $MAILFILE
And here is my crontab entry:
15 0 * * * /share/CACHEDEV1_DATA/.scripts/backup.sh
The script has the executable-flag, and as I said other scripts within the same folder works.
Does someone has an idea? Because if this works manually on QNAP and also works in crontab on another UBUNTU server, then I think I am getting dumb and paranoid :-)
|
QNAP 4.1.0 & Using own backup script with crontab
|
Yes. The basic form is:
RESTORE DATABASE <dbname> FROM DISK='<path to bak>';
See http://technet.microsoft.com/en-us/library/ms186858.aspx for details.
|
Is it possible to restore database from the latest .bak file using sql script?
|
Restore Database from latest .bak
|
The concern would be a potentially inconsistent state if any database operation occurred while the backup image was being created. Some of the database transactions may not have been flushed to disk yet, still residing in memory.
Since mysqldump can use the internal state of the database, I'd recommend using a cron job to regularly perform a mysqldump, and then backing up the output of the mysqldump with Cloud Backups.
Something like the following for the cron job:
#!/bin/sh
mysqldump -h DB_HOST -u DB_USER -p'DB_PASSWORD' \
    DB_NAME > PATH_TO_BACKUPS/db_backup.sql
gzip -f PATH_TO_BACKUPS/db_backup.sql
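A nightly crontab entry for such a script might look like this (the script path is a placeholder):
0 3 * * * /bin/sh /usr/local/bin/mysql_backup.sh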
References:
http://www.rackspace.com/cloud/backup/
http://www.rackspace.com/knowledge_center/article/rackspace-cloud-backup-backing-up-databases
|
Rather than running mysqldump (e.g.) every morning, would it be fine to just rely on the daily server image backups that Rackspace Cloud Servers do? Or, is there a future headache that I'm not seeing?
|
Is relying on Rackspace Cloud Server backup images an okay way to handle mysql backups?
|
Here's what I would do. Let's say the app is called "HPlan", so you have a \app\HPlan folder on your Essbase server. Within the HPlan folder there are several subfolders for each database, such as HPlan, Capex, Wrkforce, and so on.
Stop the entire app using EAS
Move the entire contents of the app\HPlan folder to a temporary folder
Place the backup contents into the app\HPlan folder so that it looks the same as before. So if before you had \app\HPlan\HPlan\HPlan.otl and \app\HPlan\HPlan\HPlan.ind and so on, you would see those again.
Start the app and see if it works.
Try to login via Planning and see that it works.
This will only work assuming the app was never deleted from Hyperion Planning itself. If you recreated or created a new app in Planning itself then this won't work. In that case you would need to roll the database (SQL Server, Oracle, etc) back to a previous point so that you still had all of the metadata that's in Planning.
Good luck -- if this helps please mark it as helpful so that more people are encouraged to answer Essbase questions on Stack Overflow!
|
User has accidentally deleted Essbase application from the EAS Console.
An application with the same name was created.
We then used Hyperion Planning to recreate the database of the new application.
We have file system backup of all the directories of the deleted application.
How can we get all the data of the old application into the new one?
Thank you for help.
|
Recover Essbase data
|
Try mysqldump with a simple shell script. The script below dumps each database as a whole and also dumps every table individually.
#!/bin/bash
USER="username"
PASSWORD="password"
OUTPUTDIR="./"
MYSQLDUMP=`which mysqldump`
MYSQL=`which mysql`
# get a list of databases
databases=`$MYSQL --user=$USER --password=$PASSWORD \
-e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
# dump each database in turn
for db in $databases; do
# skip internal databases
if [ "$db" == "information_schema" ] || [ "$db" == "mysql" ]; then
continue
fi
# dump whole database
echo "$db"
$MYSQLDUMP --force --opt --user=$USER --password=$PASSWORD \
--databases $db > "$OUTPUTDIR/$db.sql.bak"
# get a list of tables inside database
tables=`$MYSQL --user=$USER --password=$PASSWORD \
-e "USE $db;SHOW TABLES;" | tr -d "| " | grep -v Tables_in_$db`
# dump tables
for tbl in $tables; do
echo "$db.$tbl"
$MYSQLDUMP --force --opt --user=$USER --password=$PASSWORD \
$db $tbl> "$OUTPUTDIR/$db.$tbl.sql.bak"
done
done
|
I'm working on a MySQL data backup and restore system but am stuck on how to go about it.
An Option which i may think of
Table Wise Backup Method
Creating a directory named with the date and time; the directory will have one definition text file listing all databases and their table names, plus a separate file for each table containing the table structure and table INSERTs.
Table Wise Restore Method
Reading directory and definition file to Sort backups with respect to dates and table names and user can select either all tables or one specific table to restore.
I'll be using PHP for this purpose as i have to upload these backup files automatically on different servers.
Questions
1- Is above backup and restore method is valid?
2- Is there a way by which i can write single file for each Database but still have some way to restore only selected or all tables in database?
3- What are important points i must to keep in mind for such applications?
Please let me know if anything Ambiguous?
|
MySql Table wise Backup and Restore
|
git-annex does what you want. It uses git-plumbing for version control but without having a copy in the index and in the repository.
|
Okay, here's a weird one!
I have this nifty little 32gig thumbdrive that I keep in my wallet that contains all the latest of EVERY file I need to go about life on a computer. It contains all my BASH command references, everything from rubiks cube algorithms to PDFs I use for references and digital books and all the software I've ever written. Friend need their computer fixed all of a sudden? BANG! I got it covered with the software in my left pocket. Sweet huh?
Well I want to keep track of every change I make to the repo as well as have constant backups of it. Git works pretty well for this, except that it copies all my files and inflates the repo's size drastically! This isn't really a big problem but it is annoying. I know Git is used for software repos and I'm cool with that (Thats what I primarily use it for anyways)
So I was thinking, could I somehow as a git nub use git to record changes I made to the repo but NOT keep a running copy of every actual change but only keep the HEAD version? So when I do a push/pull the repo would be backed up with all my comments but ONLY what is in the active directory tree within the flash drive would be saved in the Git repository? So there wouldn't be duplicate binary entries, if I removed a .exe file it would log it but get rid of the file forever?
Maybe there is a better way of doing this without using git? For instance I WAS using 7ZIP to compress and make backups manually, then saving them to a cloud server for backup redundancy. That was time intensive and much more annoying than a simple:
git add .
git commit -m "changes"
(git gc)
git push
Any help would be greatly appreciated!
|
Git as a repo that doesn't have version control?
|
Have you tried looking at the documentation? Perhaps the "Data Recovery Reference"?
http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.ha.doc/doc/c0006150.html
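In short, a basic offline backup from the DB2 command window looks something like this (the database name and target directory are placeholders):
db2 backup database MYDB to D:\db2backups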
|
DB2 v10.1 database on WINDOWS 7.
Can somebody explain how to create a backup of a DB2 database? I could not find detailed instructions.
Thanks in advance for any help in this matter
|
DB2: How to backup a DB2 database?
|
Why not just do what every Unix, Mac, and Windows editor has done for years: use a lockfile/working-file concept.
When a file is selected for edit:
Check to see if there is an active lock or a crashed backup.
If the file is locked or crashed, give a "recover" option
Otherwise, begin editing the file...
The editing tends to do one or more of a few things:
Copy the original file into a ".%(filename)s.backup"
Create a ".%(filename)s.lock" to prevent others from working on it
When editing is achieved, the lock goes away and the .backup is removed
Sometimes things are slightly reversed, and the original stays in place while a .backup is the active edit; on success the .backup replaces the original
If you crash vi or some other text program on a Linux box, you'll see these files created. Note that they usually have a dot (.) prefix, so they're normally hidden on the command line. Word/PowerPoint/etc. all do similar things.
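A minimal Python sketch of that lock/backup pattern (the function and file-naming choices here are illustrative, not a standard library API):
import os
import shutil

def edit_with_backup(path, edit):
    """Minimal sketch of the lock/backup pattern described above."""
    backup = path + '.backup'   # copy of the original state
    lock = path + '.lock'       # marker that an edit session is active
    if os.path.exists(lock) and os.path.exists(backup):
        # a previous session crashed: restore the original state
        shutil.copy2(backup, path)
        os.remove(lock)
    shutil.copy2(path, backup)  # preserve the original before editing
    open(lock, 'w').close()     # take the lock
    try:
        edit(path)              # caller-supplied modification of the file
    finally:
        os.remove(lock)         # session ended; release the lock
        os.remove(backup)       # original state no longer needed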
|
I need to modify a text file at runtime but restore its original state later (even if the computer crash).
My program runs in regular sessions. Once a session ended, the original state of that file can be changed, but the original state won't change at runtime.
There are several instances of this text file with the same name in several directories. My program runs in each directory (but not in parallel), but depending on the directory content's it does different things. The order of choosing a working directory like this is completely arbitrary.
Since the file's name is the same in each directory, it seems a good idea to store the backed up file in slightly different places (ie. the parent directory name could be appended to the backup target path).
What I do now is backup and restore the file with a self-written class, and also check at startup if the previous backup for the current directory was properly restored.
But my implementation needs serious refactoring, and now I'm interested if there are libraries already implemented for this kind of task.
edit
version control seems like a good idea, but actually it's a bit overkill since it requires network connection and often a server. Other VCS need clients to be installed. I would be happier with a pure-python solution, but at least it should be cross-platform, portable and small enough (<10mb for example).
|
Are there a modules for temporarily backup and restore text files in Python
|
I presume you mean that you want to be able to restore the data after the user has deleted the application and re-installed it. Since each application is confined to its own sandbox, the only sane option would be to use a server. As for simply answering your question: there is no way to do it as you've described.
|
I want to take a backup of all the data in my iPhone app's database to Dropbox and restore it later. The main problem is that there is a lot of data in the app, so I am using SQL queries to fetch data from the sqlite3 database and store it into a file, and then reading data from that file and inserting it back into the database. Can anyone suggest how I can improve that, or what the best way to do it is?
|
what is the best way to take the backup of data of sqlite3 database of iphone application?
|
I currently have this setup at work. We use a customer hosted SVN server that at times has had some connectivity issues. We also want to make sure we have a local copy of things as well. We setup a SVN repo locally, and then run the svnsync command that is executed as a result of a continuous integration service to sync changes from the remote repo to the local mirror. You can read more about the svnsync command here. This generates an exact replica of the repo as it basically plays back the changeset into the mirror. This is also helpful for us in that if the connectivity "issues" return, we have a readonly repo available to pull working copies from to continue work until connectivity is restored ( you can do a svn switch/relocate to work off the mirror while the master is down ). This may be more complex then you are looking for, but I love the setup we have and it has saved my bacon more than once. Figured I would share an option. Good luck
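For reference, the initial mirror setup is roughly as follows (paths and URL are placeholders; the mirror needs a pre-revprop-change hook that allows the revprop changes svnsync makes):
svnadmin create /backups/mirror
printf '#!/bin/sh\nexit 0\n' > /backups/mirror/hooks/pre-revprop-change
chmod +x /backups/mirror/hooks/pre-revprop-change
svnsync initialize file:///backups/mirror http://svnserver/repo
svnsync synchronize file:///backups/mirror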
|
We have a SVN server here that over the years has accumulated hundreds of thousands of revisions and reached a size of >40 GB. We'd like to do a continuous backup on the repository, but it just takes too long to dump or copy the whole repository to our backup server each time.
Is there a way to add the latest revisions of the repository to an existing dump file automatically? I know with the --incremental option this can be done manually, but I was wondering if there was a command that would essentially work like so:
svnadmin dump repo --revision dumpfile_latest:repo_latest --incremental >> dumpfile
Here dumpfile_latest would be the revision number of the latest revision of the backup (named dumpfile) and repo_latest would be the latest revision number of the repository I am backing up (named repo). Thanks for any suggestions!
|
SVN continuous dump / backup
|
This is what I did to get the database and other folders to automatically sync to my USB drive every time I plug it in:
Install SyncToy made by Microsoft.
Add a folder pair to copy from the DB folder to the USB drive.
Filter files to only the files you want to copy.
Create a batch program with the content below called SyncMe.bat and save it on your USB drive
Open Event Viewer from the Control Panel.
Navigate to Applications and Services Logs > Microsoft > Windows DriverFrameworks-UserMode > Operational
Clear the log to make it easier to find the right events.
Plug in the USB drive.
Refresh the log, find the latest event that is specific to that USB drive, right-click, and Attach Task To This Event.
Add an action that runs the SyncMe.bat program you created
SyncMe.bat contents (update to reflect your paths and file names):
@echo off
:: Only run when the USB drive (G:) is actually plugged in
if exist "G:\SyncMe.bat" goto fileexists
goto nofile
:fileexists
:: Detach the database so its .mdf/.ldf files can be copied safely
"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\OSQL.EXE" -S computername\instancename -E -n -Q "master..sp_detach_db 'DatabaseName'"
:: Run all active SyncToy folder pairs
"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R
:: Re-attach the database once the copy is done
"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\OSQL.EXE" -S computername\instancename -E -n -Q "master..sp_attach_db 'DatabaseName', 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.InstanceName\MSSQL\DATA\DatabaseName.mdf','C:\Program Files\Microsoft SQL Server\MSSQL10_50.InstanceName\MSSQL\DATA\LogName_log.ldf'"
goto end
:nofile
echo SyncMe.bat not found on G:\
goto end
:end
|
I have a small SQL database that is on my development PC only. I'd like to make frequent backup copies of it to a thumb drive. If I could have it do it automatically on a schedule when the thumb drive is detected, that'd be even better. What's a good way to do this?
|
Easiest way to copy development SQL Server to thumb drive
|
You will need
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
because of
File dir = new File("/mnt/sdcard/bcfile");
I wonder if you can ever access: /data/data/com.android.providers.telephony/databases/mmssms.db
|
I'm taking an SMS backup using this:
public void smsbackup() throws IOException
{
    // source: the system SMS database (a private file of the telephony provider)
    InputStream in = new FileInputStream("/data/data/com.android.providers.telephony/databases/mmssms.db");
    // destination: a folder on the SD card
    File dir = new File("/mnt/sdcard/bcfile");
    dir.mkdirs();
    OutputStream output = new FileOutputStream(new File(dir, "mmssms.db"));
    byte[] buffer = new byte[1024];
    int length;
    while ((length = in.read(buffer)) > 0)
    {
        output.write(buffer, 0, length);
    }
    output.flush();
    output.close();
    in.close();
}
It throws an exception like "permission denied". I don't know which permission I should add - can anyone tell me? Thanks in advance.
|
What permission am I missing?
|
The transactional backup interval likely refers to how often transaction logs for your TFS databases are backed up. The schedule you choose will probably depend on how busy your repository is.
At my current client there are six developers, and we share some of the load for source control between VSS and TFS (we're transitioning). Corporate policy says we must backup transaction logs every hour during business hours, and an additional one at midnight. Our local backups are on a four-day retention cycle with off-site backups lasting years.
I would make a decision based on how much work you'd want to lose if your repository was lost and your working copy was destroyed simultaneously (natural disaster?).
|
I'm in the process of setting up a backup plan for a Team Foundation Server. I downloaded Power Tools for TFS and I'm using the Backup Plan Wizard that was included in that pack. I am now at the step where I'm supposed to decide how to schedule the backups and I have no idea what to choose for my setup.
I get what everything means, except Transactional Backup Interval.
I would appreciate suggestions for a good schedule. What I would like to achieve is being able to restore and still look back a few versions, if possible. The minimum backup I would like to have is the latest version.
It might be important to add that I got to choose "Backup retention days" earlier and set that to 30.
|
What does Transactional Backup Interval mean in the Backup Plan Wizard for Team Foundation Server?
|
For Windows batch scripting, xcopy (native to Windows) and robocopy (a free download for older Windows versions, built in since Vista) both work extremely well.
An example xcopy script (in a .bat file):
@echo off
:: /L = list only (dry run - shows what would be copied).
:: /V = verify each copy, /Y = no prompting, /S = include subdirectories, /Z = restartable network mode (survives bad connections)
:: /I = tells xcopy the destination is a folder, /F = display full source/destination names, /D = copy only changed files
echo Backing up projects...
xcopy e:\projects h:\projects /V /Y /S /Z /I /F /D
It will even support orphaned files (if you delete something from your source, you no longer need a copy in the backup). Xcopy is typically fine for your needs until you deal with syncing between NTFS and FAT32 file systems - the latter only has a 2-second timestamp resolution and problems with daylight saving time, so you occasionally run into issues: (a) on the day the clocks change you might not get a backup of a changed file (depending on how regularly files change), or you might get a backup of all files even though none have changed; (b) because of the coarse time resolution, some files may be backed up even though they haven't changed.
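If you prefer robocopy, a single mirroring command covers all four cases in the question, including deleting orphaned files from the destination; the paths below are just examples:
:: /MIR = mirror the tree (copies new/changed files, deletes orphans in the destination)
:: /FFT = assume FAT-style 2-second timestamps, which avoids FAT32 false positives
:: /Z = restartable network mode, /R:3 /W:5 = retry 3 times, waiting 5 seconds between retries
robocopy e:\projects h:\projects /MIR /FFT /Z /R:3 /W:5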
|
I'm trying to create a script that will automatically back up a complete directory tree. I also want it to work for incremental backups. Basically, it will work like this:
If file is in both source and destination and they are different, source file will be copied
If file is in both source and destination and they are the same, nothing will be copied
If file is only in the source, source file will be copied
If file is only in the destination, destination file will be deleted.
I'm still new to shell scripting and I'm not sure how I could implement this. Any ideas? Windows batch scripts would be better, but shell scripts that run on Cygwin are also fine.
|
Incremental backups of directories with a batch/shell script
|
You can write a bash script that runs mysqldump, compresses the dump with gzip, and mails it out, and then have cron run it daily.
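A minimal sketch of such a script follows; the database name, credentials, image directory and e-mail address are placeholders, and it uses mutt for the attachments because plain mail has no portable attachment option:
#!/bin/bash
# nightly-backup.sh - dump the database and image files, compress them, and mail them out
STAMP=$(date +%Y-%m-%d)
DB_DUMP=/tmp/mydb-$STAMP.sql.gz
IMG_ARCHIVE=/tmp/images-$STAMP.tar.gz

# dump and compress the database in one pass
mysqldump -u backupuser -pSecret mydb | gzip > "$DB_DUMP"

# archive the image directory
tar czf "$IMG_ARCHIVE" /var/www/images

# mail both files as attachments
echo "Nightly backup for $STAMP" | mutt -s "Backup $STAMP" -a "$DB_DUMP" "$IMG_ARCHIVE" -- admin@example.com

rm -f "$DB_DUMP" "$IMG_ARCHIVE"

# crontab entry to run it every night at 03:00:
# 0 3 * * * /usr/local/bin/nightly-backup.sh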
|
I need to automatically back up my MySQL database and all image files on the server every day.
Is there a way to back these up and send them to my email address?
|
Automatic backup [closed]
|
I use getmail to periodically download the mail via POP to a set of folders on my backup drive. Gmail can be configured so that it doesn't delete or archive emails when they're downloaded, so this has no effect on the web interface.
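For reference, a minimal getmail configuration along these lines does the job (the address, password and paths are placeholders; the Maildir with its cur/, new/ and tmp/ subdirectories has to exist already, and POP access must be enabled in Gmail's settings):
# ~/.getmail/getmailrc
[retriever]
type = SimplePOP3SSLRetriever
server = pop.gmail.com
username = you@example.com
password = your-password

[destination]
type = Maildir
path = ~/backup/mail/

[options]
# leave everything on the server so the web interface is untouched
delete = false
read_all = false

# crontab entry to fetch new mail every night at 02:00:
# 0 2 * * * getmail --quiet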
|
What's a good way to back up old emails? In my case, I own a domain name that forwards all email to a Gmail account. I'm afraid that Gmail will one day go away, start charging, or lose my emails. It'd be nice to be able to search the emails that have been archived.
Thanks
|
Backing Up Email
|
Check the archive bit out. It may be what you want.
In .NET it's System.IO.FileAttributes.Archive, which can be used with SetAttr in VB, or System.IO.FileInfo.Attributes or System.IO.File.SetAttributes().
Any algorithm that checks the last modified time or archive bit will depend on the number of directories on the drive. Since both attributes are stored in the directory, the timing will depend on the filesystem and its level of caching. A more efficient way to analyse backup efficiency may be to look at the number of blocks that have changed.
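Since the question mentions .NET, the check/clear cycle looks roughly like this (a sketch only; BackUp stands in for whatever copy routine you use):
using System.IO;

// the OS sets the archive bit whenever a file is created or modified
var attrs = File.GetAttributes(path);
bool needsBackup = (attrs & FileAttributes.Archive) == FileAttributes.Archive;

if (needsBackup)
{
    BackUp(path);  // hypothetical copy routine
    // clear the bit so the file is skipped until it changes again
    File.SetAttributes(path, attrs & ~FileAttributes.Archive);
}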
|
I am looking at writing a program (.NET) for backing up files on a computer. How would I go about knowing which files have changed and need backing up? (I don't really want to scan the last-modified dates each time.)
Are there any backup algorithms for backing up only the bits of a file that have changed? What is the O notation (complexity) of such an algorithm?
|
Backup Algorithm
|
Take a look at django-chronograph. It has a pretty nice interface for scheduling jobs at all sorts of intervals. You might be able to borrow some ideas from that. It relies on python-dateutil, which you might also find useful for specifying repeating events.
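For instance, dateutil's rrule can express a repeating schedule directly and hand you the next occurrence without any looping; a small sketch (the weekdays and time are made up):
from datetime import datetime
from dateutil.rrule import rrule, WEEKLY, MO, WE, FR

# "every Monday, Wednesday and Friday at 02:30"
schedule = rrule(WEEKLY, byweekday=(MO, WE, FR), byhour=2, byminute=30,
                 bysecond=0, dtstart=datetime(2009, 1, 1))

# next scheduled run after "now" - compare this with the time of the client's poll
next_run = schedule.after(datetime.now())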
+1 for django-chronograph, and it uses dateutil to provide a rich time specification language. – Van Gale, Mar 7, 2009 at 8:37
dateutil looks really powerful, but I don't think I'll use it for this project to keep the dependencies down. With the method described above I don't need anything more complicated than a bit of maths! – Rob Golding, Mar 7, 2009 at 16:09
|
I am writing a backup system in Python, with a Django front-end. I have decided to implement the scheduling in a slightly strange way - the client will poll the server (every 10 minutes or so), for a list of backups that need doing. The server will only respond when the time to backup is reached. This is to keep the system platform independent - so that I don't rely on cronjobs or suchlike. Therefore the Django front-end (which exposes an XML-RPC API) has to store the schedule in a database, and interpret that schedule to decide if a client should start backing up or not.
At present, the schedule is stored using 3 fields: days, hours and minutes. These are comma-separated lists of integers, representing the days of the week (0-6), hours of the day (0-23) and minutes of the hour (0-59). Deciding whether a client should start backing up or not is a horribly inefficient operation - Python must loop over all the days since a point 7 days in the past, then the hours, then the minutes. I have done some optimization to make sure it doesn't loop too much - but still!
This works relatively well, although the implementation is pretty ugly. The problem I have is how to display and interpret this information via the HTML form on the front-end. Currently I just have huge lists of multi-select fields, which obviously doesn't work well.
Can anyone suggest a different method for implementing the schedule that would be more efficient, and also easier to represent in an HTML form?
|
What is the best way to represent a schedule in a database, via Python/Django?
|
This is probably not quite what you're after, because I don't think exactly what you're asking for is possible.
However you could place the tables in question into a different file group. Then when it comes to restoring, you need only restore the file group that relates to the tables.
File Groups in SQL SERVER
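Very roughly, the idea looks like this on a more recent SQL Server (all names and paths are made up, and the exact syntax on SQL Server 7 may differ, so treat it only as a sketch):
-- give the development tables their own filegroup
ALTER DATABASE BigDb ADD FILEGROUP DevTables;
ALTER DATABASE BigDb ADD FILE
    (NAME = DevTablesData, FILENAME = 'C:\MSSQL\Data\BigDb_dev.ndf')
    TO FILEGROUP DevTables;
-- new tables are placed on it with CREATE TABLE ... ON DevTables

-- back up and restore only that filegroup instead of the whole database
BACKUP DATABASE BigDb FILEGROUP = 'DevTables' TO DISK = 'D:\Backup\BigDb_dev.bak';
RESTORE DATABASE BigDb FILEGROUP = 'DevTables' FROM DISK = 'D:\Backup\BigDb_dev.bak';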
|
Is it possible to restore individual tables from a full backup file of Microsoft SQL Server 7? (Yes, I know this is really old, but our client can't upgrade for various reasons.)
The total backup file is about 180 GB in size, and restoring the whole database once a week to a development server is not practical, as it takes several days (literally). But for development, we'd just need some tables out of this huge file.
Is it somehow possible to extract only the tables we need from the backup file?
Thanks in advance!
Best regards,
Martin
|
Restore individual tables on Microsoft SQL Server 7?
|
iDrive is great and free under 2 GB.
|
I admit this is not strictly a programming question, although I do use my WHS as a source repository server for home projects, and I'm guessing many other coders here do as well.
Does anyone have a recommendation for a good backup solution for the non-fileshare portion of Windows Home Server? All the WHS backups I've seen handle the fileshares, but none of the system files or other administrative stuff on the box.
Thanks,
Andy
|
Windows Home Server backup solution
|
Beginning with MariaDB 10.11 (and MariaDB Connector/C 3.3.8), the client-side gssapi authentication plugin is statically linked into the client library (which is used by all client tools).
There is one exception, mariadb-backup: it doesn't use the client library but the embedded server library (libmysqld), which is not linked against the client gssapi plugin.
I filed an issue (MDEV-33192) in MariaDB Bug Tracker
Until this bug is fixed there are several workarounds:
Install auth_gssapi_client.dll from Connector/C <= 3.3.7 or from Server <= 10.10
Use another authentication method, e.g. native_password or ed25519 (a sketch of this follows below).
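For the second workaround, switching the account used for backups to native password authentication looks roughly like this (the account and password are placeholders):
-- run as an administrative user, then pass the same password to mariabackup
ALTER USER 'root'@'localhost' IDENTIFIED VIA mysql_native_password
    USING PASSWORD('YourSecretHere');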
|
I installed MariaDB 10.11 on Windows and am trying to do a backup using
$ mariabackup --backup --target-dir=F:/backup --user=root --password=
https://mariadb.com/kb/en/full-backup-and-restore-with-mariabackup/
returns with
Failed to connect to MariaDB server: Authentication plugin 'auth_gssapi_client' cannot be loaded:
But the MariaDB web site says this plugin is included with 10.11.
Any ideas?
|
MariaDB 10.11 Windows Backup Authentication plugin 'auth_gssapi_client' cannot be loaded
|