Response (string, 8-2k chars) | Instruction (string, 18-2k chars) | Prompt (string, 14-160 chars)
---|---|---|
I don't think it's a good idea to put backup files in Subversion. Subversion is designed as a version control system; backups don't need to be version controlled.
Instead, include the Redmine backups in your normal backup schedule, and also make sure the Subversion repository itself is backed up.
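For example, a minimal nightly script along these lines could cover both; the paths, credentials and the MySQL assumption are placeholders, not Redmine defaults:
#!/bin/sh
# Hypothetical nightly Redmine + Subversion backup -- adjust paths and credentials.
STAMP=$(date +%Y-%m-%d)
DEST=/var/backups/redmine/$STAMP
mkdir -p "$DEST"
# Dump the Redmine database (MySQL assumed here; -p prompts, so use a credentials file for unattended runs).
mysqldump --single-transaction -u redmine -p redmine > "$DEST/redmine.sql"
# Archive the Redmine "files" directory (attachments).
tar czf "$DEST/files.tar.gz" -C /var/www/redmine files
# Take a consistent copy of the Subversion repository itself.
svnadmin hotcopy /var/svn/myrepo "$DEST/svn-hotcopy"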
|
I installed and set up Redmine project management application. Now I need to setup its backups (which include database dump + "files" directory). But I have a question:
Do I have to check in my Redmine backups into my SVN repository or not?
|
Should I add (check in) my project management system backup to repository?
|
You can do this using the 'import_template' property (documented here) instead of 'import_transform':
- property: __key__
import_template: "%(first_name)s %(last_name)s"
|
I am converting a script to use the new bulkloader. (What was wrong
with the original bulkloader? - I prefer writing Python to editing
configuration files...)
Anyway, I want to prevent duplicates by assigning a combination of
properties to the key.
The docs say:
If you want to use or calculate a key
from the import data, specify a key
using the same syntax as the property
map; that is, external_name,
import_template, and so on.
All the examples apply a transform to the current property. How do I
instead use a combination of other properties?
Should be something like:
- property: __key__
external_name: key
import_transform: entity.first_name + entity.last_name
|
set key with new bulkloader
|
You can use:
date +%m.%d.%Y
To get the current date (e.g. 10.10.2010)
To include it within a command, you can do:
mkdir ~/myuser/`date +%m.%d.%Y`/backup
Note that those are backticks (command substitution), not single quotes.
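Applied to a backup script like the one in the question below, it could look something like this (user@host stands in for the real source server):
#!/bin/sh
# Illustrative repos_backup.sh with a dated destination directory.
DATE=$(date +%m.%d.%Y)
DEST=/home/backup_sys_user/repos_backup/$DATE
mkdir -p "$DEST"   # -p creates the dated directory if it doesn't exist yet
rsync -razv user@host:repos "$DEST"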
|
CentOS 5.3
I have a script file that works for backing up a repository and I am using rsync to copy the contents to the source directory.
I have a crontab job that runs every night at 12am that calls my script file. repos_backup.sh
rsync -razv [email protected]:repos /home/backup_sys_user/repos_backup
This puts all the repositories in the repos_backup directory. However, I am looking for a way to create a new directory named after the date the backup was done. So I should have a directory structure like this:
/repos_backup/10.10.2010
/11.10.2010
/12.10.2010
I haven't done much scripting before; is there any way to do this?
Many thanks for any advice,
Steve
|
Writing a script file to create a new folder based on the current date
|
Depends what you mean. Backing up a file could be as simple as copying it somewhere else.
cp myfile /my/backups/myfile.bak
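If you want to keep more than one copy, a dated file name or a tar archive works too (the paths here are just examples):
cp -p myfile "/my/backups/myfile.$(date +%Y%m%d).bak"      # -p keeps ownership and timestamps
tar -cvf "/my/backups/mydir.$(date +%Y%m%d).tar" /path/to/mydir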
|
How can I take a backup of a file on UNIX (AIX)?
|
How can I take a backup of a file on UNIX (AIX)?
|
I wouldn't really call it a backup, but look at exp/imp and expdp/impdp (data pump) in the Utilities manual
|
I need to automate a selective table / user object backup I currently am doing via PL / SQL Developer.
The way I currently do it is via Tools/Export Tables and Tools/Export User Objects, manually select tables / objects, then set the options, choose destination and export. I do this from a windows laptop and the database is located in a suse linux server, both are in the same LAN. DB is running 24/7 and can not be shutdown. Also currently my oracle programming skills are very basic as I only do maintenance to this solution. I would like to keep doing the backup process in the windows laptop, but I would consider a server side script solution also and then retrieving the .sql files from server.
Thanks in advance
|
Selective tables/objects Oracle Backup
|
Here's one way that still uses groovy, but not GORM. Since it is not related to a specific object, I wouldn't be concerned that you aren't using GORM. You can of course also drop down into java JDBC directly.
def conn = new groovy.sql.Sql((java.sql.Connection) AH.application.mainContext.sessionFactory.currentSession.connection())
try {
    conn.execute("YOUR SQL STATEMENT")
}
catch (Exception e) {
    System.out.println "Error " + e.toString()
}
conn.close()
|
I want to backup HSQLDB from grails,
Here's the command BACKUP DATABASE TO 'C:/BACKUP/' BLOCKING
But how to do this in GORM where all seems Entity related even
executeQuery ?
Thank you for sharing your experience :)
|
How to execute BACKUP DATABASE query from grails?
|
I've finally found a really "simple-to-use" solution:
ncftp
http://www.cyberciti.biz/tips/linux-download-all-file-from-ftp-server-recursively.html
|
I'm looking for some backup solution. My request is pretty simple:
Source - FTP credentials (ftp://user:[email protected]/dir1/dir2)
Destination on local HDD (/var/backup/server-tld)
Possibility of packing to archive (tar.gz/zip)
Plan this "script" as a cron job with defined period (e.g. once a day)
I know, that all this can be done using bash scripts, but it seems to be a little bit uncomfortable.
I can't believe there's no simple solution for this.
|
Remote backup solution for openSuSE
|
No, you can't. What you could do is set up WAL archiving to make incremental backups:
http://www.postgresql.org/docs/current/static/continuous-archiving.html#BACKUP-ARCHIVING-WAL
This can only be done for the whole cluster, not for a single database.
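As a rough sketch (the paths are assumptions and exact settings depend on your PostgreSQL version), archiving is enabled in postgresql.conf and combined with an occasional base backup of the whole cluster:
# postgresql.conf:
#   archive_mode = on
#   archive_command = 'test ! -f /var/backups/wal/%f && cp %p /var/backups/wal/%f'
# Then take a base backup from time to time; on PostgreSQL 9.1+ for example:
pg_basebackup -D /var/backups/base/$(date +%Y%m%d) -Ft -z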
|
I have a decent sized PostgreSQL database (approx 6GB & growing). A full backup/export of the database is done every few hours via cron & pg_dump. Specifically, can I export only the changes to the database since the last export? Or perhaps run a utility that compares the two exports and appends the differences to the original, etc? I'm trying to save disk space and "cloud" transfer time.
|
Export only new data since the last PostgreSQL database export
|
You could use Java applets: http://www.captain.at/programming/java/
|
For an automated web-backup solution we're thinking of, how can we access and copy a "locked" file from a user's system onto our webserver? This is of course with the user's permissions.
Just seeking code that demonstrate this.
Added: I know where the file is located, I do NOT want to prompt the user to select the file everytime. It should be that he accesses the website and the site automatically backups the file for him (a first time setup is fine)...
|
Browser script to automatically copy a "locked" file from a user's system (Windows)
|
You need to tell SharePoint where your new database is. Just go to the content database management page in Central Administration; there you will see your previous database mapped to the web application. Remove it and add the new database. SharePoint will map the web app to it and your sites will come back. Be careful when entering the server and database names, as you could create a new database instead of connecting to the existing one if you mistype the name.
|
I have sharepoint sites provisioned on two machines A and B. I would like to take the content database from machine A and restore it into the site on machine B.
I used SQL backup to backup machine A's database, and restored it to machine B, overwriting the existing content database. However, my sharepoint site became unreachable - I would get a generic site not found error. Did I also have to back up and restore SharePoint_Config database too?
What is the best practices for this kind of scenario?
|
Backing up and restoring a sharepoint content database across two machines, hosed my sharepoint app
|
system($command); executes the external mysqldump command.
I don't really understand the script you pasted, it looks like a mixture of two scripts?
Anyway, the syntax errors in the dump file probably mean that you have a version mismatch (4.x running on the target server?).
Use the --compatible switch to generate dump files that work in the target version, e.g.
--compatible=mysql40
|
I've tried using mysql dump in the command line and it worked. How do I do it in php?
I found this code from the internet, and tried it.
<?php
ob_start();
$username = "root";
$password = "mypassword";
$hostname = "localhost";
$sConnString = mysql_connect($hostname, $username, $password)
or die("Unable to connect to MySQL");
$connection = mysql_select_db("test",$sConnString)
or die("Could not select DB");
$command ="C:\wamp\bin\mysql\mysql5.1.36\bin --add-drop-table --host=$hostname --databases test > C:\wamp\www\test\tester.sql";
I don't really understand what this following code means:
system($command);
$dump = ob_get_contents();
ob_end_clean();
$fp = fopen("dump.sql", "w");
fputs($fp, $dump);
fclose($fp);
?>
I imported the generated .sql file to a database and I got this error from phpmyadmin:
There seems to be an error in your SQL query. The MySQL server error output below, if there is any, may also help you in diagnosing the problem
ERROR: Unknown Punctuation String @ 3
STR: :\
SQL:
E:\Users\Nrew\Documents>set path=C:\wamp\bin\mysql\mysql5.1.36\bin;
E:\Users\Nrew\Documents>set path=C:\wamp\bin\mysql\mysql5.1.36\bin;
E:\Users\Nrew\Documents>set path=C:\wamp\bin\mysql\mysql5.1.36\bin;
SQL query:
E:\Users\Nrew\Documents>set path=C:\wamp\bin\mysql\mysql5.1.36\bin;
MySQL said:
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'E:\Users\Nrew\Documents>set path=C:\wamp\bin\mysql\mysql5.1.36\bin' at line 1
Please help, I want to learn how to do this in php.
|
How to use mysql dump in php
|
Creating a database backup will not cause rows to be deleted. Something else must be happening to cause that behavior.
Do you know that the rows disappear at (about) the same time as the backup is being made? Perhaps within +/- minutes, hours, or days? Can the problem be replicated, or does it appear to occur randomly? (How long does it take to perform the backup? Does this occur for complete, differential, and/or transaction log backups?)
I'd recommend running SQL Profiler before, during, and after the backup (during that +/- window), and watch carefully for events that might delete rows. You may have to do this for each backup for a while, until you hit an occurance of the actual problem.
|
I just noticed two months after switching out a backup drive that one table in one of the backed-up databases is losing records past a certain point.
The database is backed up weekly.
Prior to the new drive, the table had records from 3/11/2010 to 6/8/2010.
After the first backup ran, the table was missing all records past 3/11/2010,except for a single record or two created the day before the backup.
The records started accumulating at that point without incident until 3 backups later, a month since the first backup that coincided with data loss. At this point, all records past 3/11/2010 were again missing (except for one or two that were created right before the backup).
This is just affecting one table in the database and it doesn't happen with every backup, just with the ones happening around 6/11 and 7/11.
Any ideas? I'm completely stymied about how to even diagnose this. Other databases on the same backup drive seem unaffected, and other tables in this database are unaffected.
|
SQL Server backup causes recent table records to disappear for one table
|
You could put a delay in your backup.php script that ensures a maximum backed up records per second rate or similar, ie using sleep().
|
On my database server there is a cronjob that backups all the databases in a way that makes it easy to restore them.
It is something like this:
0 5 * * * /usr/local/bin/backup.php
The problem is that the website (using that db server) is very slow during that process.
Even, Pingdom sends me a 'website down' alert at the start of the process.
To solve the problem, I have tried this change:
0 5 * * * /bin/nice -n 19 /usr/local/bin/backup.php
but it doesn't seem to improve the situation.
How is that possible?
How would you solve the problem under these requirements?
1. no purchase of any hardware
2. easy to implement and maintain
3. no proprietary solutions
|
Generation of backup puts my website down - is there an easy solution?
|
The "restore database;" command will read the backup from the backup media media so that your database files are exactly like they were when the last backup was taken. It does not restore control files.
The "recover database;" command will apply incremental backups (not applicable - your example only has a full backup) and apply archive logs (also not applicable, you're in "NOARCHIVELOG" mode.) It may also write to the control files - if it does, you can see why it's required.
After the restore/recover/open commands you issued in your question your database is as it was at the time of the backup. Any transactions committed after the backup are lost and can't be recovered because you're in "NOARCHIVELOG" mode. You need to be in "ARCHIVELOG" mode to do a complete "point in time" recovery.
BTW, what files, if any, did you delete, rename or move to really simulate a true media failure? I'll bet you didn't delete one of your control files. You need to practice that scenario.
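If you do want point-in-time recovery later, switching to ARCHIVELOG mode is a one-off step; a sketch in RMAN, using the same style of commands as the question, might be:
shutdown immediate;
startup mount;
sql 'alter database archivelog';
sql 'alter database open';
backup database plus archivelog;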
What's a "control file"? Actually, I don't really need to know! I'm just trying to work with a simple Oracle Express database for a course on SQL that I'm studying. I would like to simply copy all the database, as is, from one of my computers to another. I don't want to have to understand anything about how Oracle databases work! Is it possible to have a "Just backup everything up and stop asking me questions and give me a file I can take to another machine" script? :-)
– Aaron McDaid
Jan 26, 2014 at 13:18
|
I would like to back up an Oracle 10g database as simply as possible.
It is in NOARCHIVELOG mode and I can shut it down for backup (it is only a development server).
After reading tons of documentation about RMAN, I tried this in RMAN:
shutdown immediate;
startup mount
backup database;
sql 'alter database open';
As I see it works fine, list backup shows backups.
Then I made some modifications (dropping some tables, adding data) and tried to restore the backup:
shutdown immediate;
startup mount
restore database;
recover database;
sql 'alter database open';
It also seems to work fine, but I can't get back the previous state of the database. I don't understand why. I also don't understand why I need to use recover.
Thanks
Hubidubi
|
oracle rman simple backup
|
cp -pur seems to do the trick!
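For the record, rsync can also keep ownership, but only when the receiving side runs as root; run via sudo, something like this is a rough equivalent of the cp -pur approach:
sudo rsync -a /web/sites/ /backups/websites/    # -a = -rlptgoD; owner/group are only preserved with root on the destination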
|
I'm trying to backup a folder (e.g) /web/sites to /backups/websites and retain attributes, etc.
cp -Rp retains the information, yet isn't incremental
rsync -va seems to be great for incremental yet doesnt retain the attr/owner
Is it possible to tar, and pipe it through and untar whilst retaining attr/owner, if so, how can I do this? or is there a better solution?
|
incremental backup of folder in linux whilst retaining attributes/ownership
|
For my personal sites, I use a ColdFusion scheduled job that runs a mysqldump, and then stores the updated backup in a dropbox account. I've never bothered encrypting the backups, though that does seem like a potential hazard. What if the encrypted file becomes corrupted? Then you can't even get a partial restore from uncorrupted sections of the file.
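If you do decide to encrypt, one option (gpg is an extra tool here, not part of ColdFusion or MySQL) is a symmetric GnuPG pass over the dump before it leaves the box:
mysqldump -u backup -p mydb > datadump.sql
gpg --symmetric --cipher-algo AES256 datadump.sql    # writes datadump.sql.gpg; keep the passphrase somewhere safe
Corruption is a risk either way, so it is worth test-restoring the archives occasionally.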
|
CF9, Windows Server 2008 Standard, IIS7, mySQL 5.1.48 community.
I have managed to get CF to take a database mySQLdump which I was going to run as a nightly cfschedule task, with a server time based lock on the application controlled in application.cfc
That will get me a local copy, but whats the best strategy to encrypt the datadump.sql text file (and what would you use to do so for sensitive personal information) and transfer to an off site location, cfftp?
|
Coldfusion and mySQL - seeking recommendations for general and off site backup strategy
|
There's no such gem b/c user data is very specific to each project.
You will have to write it yourself.
|
I am currently working on a project and I would like my users to be able to back up/restore their accounts.
I am looking for a rails plugin/gem that would easily do that, ie :
current_user.backup()
=> backup_file
current_user.restore(backup_file)
=> database import/replace
I don't know if my question is very clear, but I would like to back up every user's related objects (posts, comments, etc.) and to be able to restore them from a backup file.
Thanks in advance,
Cédric.
|
user's account backup and restore
|
Your backup user needs the RELOAD privilege, plus SELECT and LOCK TABLES on each schema you need to back up.
The easiest way is to create a user bound to localhost only (e.g. back@localhost) and use SSL certificates to authenticate the user against the server (if your MySQL installation has SSL support).
I am not sure you can use host-based authentication to log in to your MySQL server. If I try on my GNU/Linux box, typing mysql -p asks for the password of the currently connected user, but I need to type in my password...
Hope that'll help a little.
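A minimal sketch of such a locked-down account (the database name and password are placeholders) could be:
mysql -u root -p -e "CREATE USER 'backup'@'localhost' IDENTIFIED BY 'choose-a-password'"
mysql -u root -p -e "GRANT RELOAD ON *.* TO 'backup'@'localhost'"
mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON mydb.* TO 'backup'@'localhost'"
RELOAD is a global privilege, so it has to be granted ON *.*; the SELECT/LOCK TABLES grant can stay limited to the schemas being dumped.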
|
What is the best/most secure way to back up a MySQL database on Windows Server 2008? I have "MySQL Administrator", but that requires saving passwords for the backup project. I'm not keen on doing that, as anyone gaining access to the server would then have easy access to the database. Can you do anything similar to SQL Server, like using Windows authentication? If not, what is the most secure (and practical) way to do backups? Lastly, what privileges are needed to back up a database? I have created a single user just for this task.
Please advise.
|
Secure way to backup MySQL databases?
|
From the TFS Admin console, have you tried going into your Reporting section underneath the Application Tier? There is an option for "Start Rebuild", which should rebuild your warehouse, models, and reports based upon what is currently in the TFS database(s).
If that does not work, you could always try stopping the jobs, deleting the databases and then starting the jobs back up. I believe that it will rebuild them from scratch at that point. I only have my production 2010 instance up and running, and not a development rig anymore, otherwise I would test this for you.
|
I recently set up a sandbox TFS to test TFS-specific features without interfering with the production TFS. I was happy I did this sooner than I thought--I hadn't been backing up the encryption key from SSRS and upon restoring the reporting databases, they remained inactive, requiring initialization that could only come from applying the encryption key. Said encryption key was lost when I nuked the partition after backing up the TFS databases.
The only option I seemed to have is to delete the encrypted data. I'm fine with this, since there wasn't much in there to begin with, however once they're deleted I'm not quite sure how to configure TFS to recognize a new installation of these services while using the restored versions of everything else. Unfortunately, the TFS help file doesn't seem to account for this state though. Is there a way to essentially rebuild the reporting and analysis databases? Or are they gone forever?
|
Is my TFS2010 backup/restore hosed?
|
I've made a tool for this here http://andymdn.com/2010/10/11/pleskdump-a-plesk-backup-extraction-utility/. Hopefully it helps you.
|
I created a backup file in Plesk
panel, downloaded it.
Renamed file to .zip and uncompressed with WinRar.
Opened unzipped file in Thunderbird
email client and extracted
"site.httpdocs" file.
How do I extract actual files from it? It seems like it is some sort of text document with all files dumped together.
Thanks.
|
How to extract files from Plesk backup under Windows?
|
Assuming it's a Microsoft SQL Server Database, then you can backup the database to a single file using the BACKUP DATABASE command.
Backing Up: http://msdn.microsoft.com/en-us/library/ms186865.aspx
Restoring: [same URL as above, not got enough rep] /ms186858.aspx
Backup Example:
BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH FORMAT;
GO
You could write this into a stored procedure and then call it in VB using a SQLCommand object. Here's a basic example:
Dim objCommand As SqlCommand = Nothing
Dim objConnection As SqlConnection
Try
    objConnection = New SqlConnection(sConnectionString)
    objConnection.Open()
    objCommand = New SqlCommand("P_YOUR_INSERT_SPROC", objConnection)
    objCommand.CommandType = CommandType.StoredProcedure
    objCommand.Parameters.Add(New SqlParameter("@SomeParam", pParamValue))
    objCommand.ExecuteNonQuery()
    Return True
Catch ex As Exception
    Throw
Finally
    objCommand = Nothing
    If objConnection.State = ConnectionState.Open Then
        objConnection.Close()
    End If
End Try
If you need to move the backup off the server and bring it back down locally then you can use something like FTP or something to bring the actual file down. Or.. if you just wanted to store it remotely and be able to restore it at will then you can name it to something which you can store, which gives you enough information to generate the RESTORE function.
|
I am making a VB.NET application that can download/backup the database that is currently on a remote server.
I have Remote Server IP,Username,Password and Database name. I am also able to connect to it.
But I don't know what to do after connecting to it. I don't know which files need to be backed up (I think both the database and the log file must be backed up, but I am not sure).
Please let me know the basic commands that I will need to back up the whole database.
Thanks in advance.
|
Backing Up Database from Remote Server to Local in VB.NET
|
In SQL Server 2000, you can use this query to list out the complete text of stored procedures, they can span multiple rows.
SELECT
o.name,o.id,o.xtype, c.colid, c.text
FROM dbo.sysobjects o
INNER JOIN dbo.syscomments c ON o.id = c.id
WHERE o.xtype = 'p'
ORDER BY o.Name,c.colid
It would be easier to use Enterprise Manager to script all the procedures, though. In Enterprise Manager, right click on the database you want to capture all the procedures from. An options list will pop up; select "All Tasks", then "Generate SQL Script...". A dialogue box will appear; click on "Show All", and you can then refine the list of objects to script by using the check boxes. Select the objects on the left side and click on "Add>>" to move them to the script list. You can set formatting and other options, then click OK when done.
In SQL Server 2005+ you can use this query to list the complete text of all stored procedures, views and functions:
SELECT
LEFT(o.name, 100) AS Object_Name,o.type_desc,m.definition
FROM sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id=o.object_id
you can take this output and save it if you like.
However, it is easier to use SQL Server management Studio to script out all procedures.
but how to make a back up of the stored procedures
– subash
Mar 24, 2010 at 3:43
|
What is the query to backup the stored procedure of a database in SQL Server 2000?
|
Backup Stored Procedures
|
Check the comment in this JIRA issue about "Adding field overwrites changes in *.jspx". By default, the automatically created views (.jspx) are maintained by Roo. You can:
1. turn this off by setting "automaticallyMaintainView = false" and edit everything freely (of course, no changes will be made to the views anymore if you add a field to an entity);
2. use your own .jspx files that Roo doesn't even know about;
3. wait for Roo 1.1, in which Roo's support for views is moving to the element level (as I understand it); see this issue where it is explained in the comments. I think this will allow for .jspx files that are partly user- and partly Roo-managed.
|
I generated a Spring Roo project and modified the .jspx files to my own styles. Unfortunately, when I used the backup command, Spring Roo regenerated the files back to the originals, so my .jspx files no longer have my styles. How can I recover my files after this command?
|
spring roo backup command lost my files
|
Only if you have replicated slaves, or you used to, and have binary logs. Even then you'd need an old copy of the database you can restore, and to configure replication again.
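Purely as a sketch: if binary logging happened to be enabled and an older dump exists, a point-in-time replay looks roughly like this (file names and timestamps are placeholders):
mysql -u root -p mydb < last_good_dump.sql
mysqlbinlog --start-datetime="2010-08-01 00:00:00" --stop-datetime="2010-08-02 12:00:00" /var/log/mysql/mysql-bin.000042 | mysql -u root -p
Without binary logs or a replica, the overwritten data is gone.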
|
I'm almost certain about the answer, but the situation is so critical that I have to ask this question even though I'm 99% sure about the answer.
Someone in our office made a backup of a MySQL database and restored it onto the wrong destination database, overwriting everything on that destination (the schema of both databases was the same). From the structure of MySQL backup files I know that the restore operation drops all the tables first, then creates them and fills them with the backed-up data. The question is: does the restore keep the old data anywhere? Is there any way of retrieving any of the old data? (logs?.. etc.)
|
MySQL Wrong Restore Data Recovery
|
Also a very good kick-starter into RMAN backup/restore: http://www.orafusion.com/art_rman1.htm
|
I want to automate the periodic backup and restore of the Oracle 10g Database.Please, someone help me immediately.
and please note that I want the task to be performed from the command line scripts.
|
How to create and restore backups of an Oracle 10g database automatically using command-line scripts?
|
For this batch to properly run, make sure you enable delayed expansion. Just add setlocal ENABLEDELAYEDEXPANSION at the beginning of your batch file.
But it will not "get the DNS Server entries into environment variables", which I believe is a different question.
|
I have a problem with this script here
for /f "tokens=3" %%a in ('netsh interface ip show config ^| find /i "DHCP Enabled"') do set DHCP=%%a
If /i "%dhcp%" == "Yes" (
REM do command here
) Else (
REM script to backup DNS servers to environment variables
)
I've tried numerous ways using the first for /f example to try and get the DNS Server entries into environmental variables to be used later.
So basically I'm looking for a way to backup the dns server/s to environment variable/s (primary/secondary DNS) if DHCP is disabled.
|
Backup DNS Server/s
|
remote_api, which the bulkloader uses, is written to deliberately require authentication, even if you omit the relevant clause in app.yaml. You can override it if you really want, but it's an incredibly bad idea - it would allow any anonymous user to do practically anything they liked to your app!
|
Is there some way or using the bulkloader.py dump and restore functionality without authentication?
I have tried using:
- url: /remote_api
script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
without the login-parameter, but login still seems to be required.
I still get
[ERROR ] Exception during authentication
I struggled with this for 6 hours yesterday, without any solution.
And yes, I have tried GAEBAR. It failed, however, when it got to entities containing Blobs of up to 1 MB (the maximum per entity).
So, I am looking to dump (and restore) for backup-purposes mainly.
|
bulkloader.py --dump without authentication
|
How about using the --result-file= parameter to mysqldump? It is recommended on Windows anyway, to avoid problems with newlines.
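Applied to the command in the question below, that would look something like this (note --opt in lower case; everything else is taken from the question):
mysqldump.exe --user=dinesh --password=accounting --host=dinesh -C --routines --default-character-set=utf8 --opt --result-file="C:\Documents and Settings\Wild\Desktop\f report\New Folder\R14122009_12469.Sql" inventory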
|
I have a problem when backing up a MySQL database. When I use the following command, it works fine:
C:\Program Files\MySQL\MySQL Server 5.0\bin mysqldump.exe
--user=dinesh
--password=accounting
--host=dinesh
-C
--routines
--default-character-set=utf8
--Opt inventory
> C:\\R14122009_12469.Sql
But when I pass Path like
"C:\Documents and Settings\Wild\Desktop\f report\New Folder\R14122009_12469.Sql"
it shows an error: table not found.
Is there any way I can save the backup to a location selected at runtime?
|
Problem with backing up a MySQL database
|
Yep, rsync:
http://librsync.sourceforge.net/
Or if you really want a complete backup (rather than sync) codebase, use the source of rdiff-backup.
|
I'm looking for a commercial/open source backup library in C++.
I have seen the Microsoft Sync Framework, but unfortunately it requires the .NET Framework to be installed...
Thank you
Jonathan
|
C++ sync and backup framework
|
You need to RESTORE from the files contained in the backup set rather than the backup set directly. The example below copies a database, but the idea is the same:
BACKUP DATABASE AdventureWorks
TO AdventureWorksBackups ;
RESTORE FILELISTONLY
FROM AdventureWorksBackups ;
RESTORE DATABASE TestDB
FROM AdventureWorksBackups
WITH MOVE 'AdventureWorks_Data' TO 'C:\MySQLServer\testdb.mdf',
MOVE 'AdventureWorks_Log' TO 'C:\MySQLServer\testdb.ldf';
GO
|
I am using SQL Server Express 2005 as a backend. I created a backup file programmatically. If I use the same server, it restores the data successfully; however, if we try to restore on a different server, it fails and throws the following message:
"The Backup set Holds a backup of a database other than the existing 'DatabaseName' database. RESTORE DATABASE is terminating abnormally."
On both servers, the SQL Server instance name and database name are the same.
Please suggest how I can resolve this error.
|
VB.NET restore Backup file created on one server to another server
|
rsync still has to calculate block hashes to determine what's changed. It may be that the no-modification case is a shortcut looking at file mod time / size.
|
I'm trying to use rsync to backup MySQL data. The tables use the MyISAM storage engine.
My expectation was that after the first rsync, subsequent rsyncs would be very fast. It turns out, if the table data was changed at all, the operation slows way down.
I did an experiment with a 989 MB MYD file containing real data:
Test 1 - recopying unmodified data
rsync -a orig.MYD copy.MYD
takes a while as expected
rsync -a orig.MYD copy.MYD
instantaneous - speedup is in the millions
Test 2 - recopying slightly modified data
rsync -a orig.MYD copy.MYD
takes a while as expected
UPDATE table SET counter = counter + 1 WHERE id = 12345
rsync -a orig.MYD copy.MYD
takes as long as the original copy!
What gives? Why is rsync taking forever just to copy a tiny change?
Edit: In fact, the second rsync in Test 2 takes as long as the first. rsync is apparently copying the whole file again.
Edit: Turns out when copying from local to local, --whole-file is implied. Even with --no-whole-file, the performance is still terrible.
|
rsync and MyISAM tables
|
cd /shares ; find . -mindepth 1 -maxdepth 1 -type d -exec rar a -v1g -m0 -ow '-ag[dd-mm-yy]' '/backupdir/{}' '{}' ';'
The find command searches for directories (-type d) non-recursively (-mindepth 1 -maxdepth 1, which also excludes "." itself) in /shares and executes (-exec) the rar command. The '{}' is replaced by the name of the directory found. I'm not sure about all your rar switches, but if the command below works then the find command should do what you want:
rar a -v1g -m0 -ow -ag[dd-mm-yy] /backupdir/Folder1 /shares/Folder1
|
I'm new to Linux commands and spend most of my time in VB. After searching the web it's hard to find a solution in Google.
Anyway, every day I back up my Shares folder and it ends up being 183 GB. I tried many ways of backing it up and came to the conclusion that using rar was the best option for my environment. So this is the command I use:
./rar a -v1g -m0 -ow -ag[dd-mm-yy] Shares "/shares"
The result I get is a lot of part files "Shares[15-07-09].part01.rar" which is fine.
What I really want to do now is to backup each folder within the shares directory, so I get something like:
Folder1[15-07-09].part01.rar
Folder2[15-07-09].part01.rar
Folder3[15-07-09].part01.rar
Well I hope you guys can help with a simple script that I should be able to understand.
|
RAR Each folder in directory
|
There's a program called dump that does something similar, but it operates on filesystem blocks rather than files. rsync also may be of interest.
You will need to keep track of a large number of blocks with multiple versions and how they fit into the various versions of the original files, so you will need some kind of database to track this information, and an efficient way to query it to determine which blocks in a given file need to be transferred. Also note that adding something to the beginning of a file will cause all your blocks to be "new" if you use a naive blocking and diff scheme.
To do this well will be very complex. I highly recommend you thoroughly research already-available solutions, and if you decide you need to write your own, consider the benefits of their designs carefully.
|
I am working on the development of an application that will perform online backup of the files and folders on a PC, automatically or manually. Currently, I was keeping only the latest version of each file at the server. Now, I have to implement versioning so that only the changes are transferred to the online server, and the user must be able to download any of the available versions of a file from the backup server.
I need to perform deduplication for this. I am able to do it using a fixed block size, but I am facing the overhead of transferring a file with CRC information for each version backed up.
I have never worked on such technology, so I lack experience. I am eager to know whether there is any feasible method to embed this functionality in the application without much pain. Is there any third-party tool that would help perform the same thing? Please let me know.
Note: I am using FTP protocol to transfer the data.
|
need to implement versioning in Online backup tool
|
As the error states, you can either do a filegroup backup or you can bring the full text data catalog online. You can identify the location of the fulltext catalog (or at least, where it's supposed to be), using the following:
SELECT sf.filename
FROM sys.fulltext_catalogs ftc
JOIN sys.sysfiles sf ON ftc.[file_id] = sf.fileid
If your catalog isn't there, perhaps it was deleted. You could manually recreate it, or there is likely a utility within MSCRM to rebuild it - you may need to contact your reseller for help with that though.
|
My organisation uses Microsoft CRM 3.0, and I am attempting to backup the database. The following error is preventing me from doing so, does anyone know how to resolve this issue?
Error:
System.Data.SqlClient.SqlError: The backup of full-text catalog 'ftcat_documentindex' is not permitted because it is not online. Check errorlog file for the reason that full-text catalog became offline and bring it online. Or BACKUP can be performed by using the FILEGROUP or FILE clauses to restrict the selection to include only online data. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.4035.00&LinkId=20476
|
How do you backup CRM3.0 when the 'ftcat_documentindex' is offline?
|
Depends on which OS etc. etc. but in most cases what you can do is copy to a temporary file name and as the last final step rename the files to the correct name.
This means the (WOOPS) Window Of Opportunity Of Potential S****p is confined to the interval when the renames take place.
If the OS supports a nice directory structure and you lay out the files intelligently you can further refine this by copying the new files to a temp directory and renaming the directory so the WOOPS becomes the interval between "rename target to save" and "rename temp to target".
This gets even better if the OS supports Soft link directories then you can "ln -s target temp". On most OSes replacing a softlink will be an "atomic" operation which will work or not work without any messy halfway states.
All these options depend on having enough storage to keep a complete old and new copy on the file system.
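A bare-bones illustration of the rename trick (the file names and the writer program are made up):
# 1. Write the full new contents to a temporary name on the same filesystem.
my_writer_program > settings.dat.tmp
# 2. Swap it in; rename() either fully succeeds or leaves the old file intact.
mv settings.dat.tmp settings.dat
# Directory variant: build "release.new", then repoint a "current" symlink in one step (GNU mv -T).
ln -s release.new current.tmp && mv -T current.tmp current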
|
My colleague and I are trying to implement a mechanism to recover from broken files on an embedded device.
This could happen under certain circumstances, e.g. the user removes the battery while a file is being written.
Orz, but now we have just one idea:
Create duplicated backup files, and copy them back if dangerous file i/o is not finished properly.
This is kind of stupid: if the backup files are also broken, we are just dead.
Do you have any suggestions or good articles on this?
Thanks in advance.
|
The strategy to get recovery from broken files?
|
Take a look at http://msdn.microsoft.com/en-us/library/ms186865.aspx for the BACKUP command. Did you try using the INIT option?
I noticed the question is very old, but maybe my answer can help someone.
|
I have an MSDE 2000 database backup file which is appending rather than deleting or renaming. I am using this command:
BACKUP DATABASE [SPSDB] TO DISK = 'E:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\SPSbackup\spsdb.bak' with retaindays = 1
I am using a maintenance plan on my full SQL version databases, and they create a new file everyday with the date in the file name.
The backup file size creeps up on me if I don't monitor it. Is there a way to have MSDE create a unique file with the daily backup job I created?
Thanks,
Chad
|
MSDE backup file is appending
|
You can run these scripts the same way you run a query, only you don't connect to the database you want to restore, you connect to master instead.
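For example, from the command line (or launched from the WinForms app via Process.Start), something like the following runs such a script against master; the server name and file name are placeholders:
sqlcmd -S .\SQLEXPRESS -d master -i "C:\scripts\restore_mydb.sql"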
|
I am developing a small business application which uses Sqlserver 2005 database.
Platform: .Net framework 3.5;
Application type: windows application;
Language: C#
Question:
I need to take and restore the backup from my application. I have the required script generated from SSME.
How do I run that particular script (or scripts) from my winform application?
|
How do I run that database backup and restore scripts from my winform application?
|
Do you mean that the application is storing a files as blobs in the MySQL database, and/or creating lots of temporary tables? Or that you just want temporary files - themselves unrelated to a database - to be stored in MySQL as a backup?
I'm not sure that trying to use MySQL as a net-new intermediary for file backups is a good idea. If the app already uses it, that's one thing; if not, MySQL isn't the right tool here.
Anyway. If you are interested in capturing a filesystem at point-in-time, the answer is to utilize LVM snapshots. You would likely have to rebuild your server to get your filesystems onto LVM, and have enough free storage there for as many snapshots as you think you'd need.
I would recommend having a new mount point just for this app's temporary files. If your MySQL tables are using InnoDB, a simple script that runs mysqldump --single-transaction in the background and then takes the LVM snapshot could get these synced up to within less than a second.
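Very roughly, that combination could look like this (the volume group, logical volume and mount points are assumptions):
# Consistent dump of the InnoDB tables while the app keeps running:
mysqldump --single-transaction --all-databases > /var/backups/db-$(date +%F).sql
# Point-in-time snapshot of the filesystem holding the app's temporary files:
lvcreate --snapshot --size 5G --name appfiles_snap /dev/vg0/appfiles
mount -o ro /dev/vg0/appfiles_snap /mnt/appfiles_snap
# ...copy the snapshot contents somewhere safe, then release it:
umount /mnt/appfiles_snap
lvremove -f /dev/vg0/appfiles_snap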
|
The application that I am working on generates files dynamically as it is used. This makes backup and synchronization between staging, development and production a really big challenge. One way that we might get a smooth solution (if feasible) is to have a script that, at the moment of backing up the database, can put the dynamically generated files into the database, and at restore time can bring those files out of the database and back into the filesystem.
I am wondering if there are any available (paid or free) applications that could be used as scripts to make this happen.
Basically if I have
/usr/share/appname/server/dynamicdir
/usr/share/appname/server/otherdir/etc/resource.file
then, taking the example paths above, the script would put them into the MySQL database.
Please let me know if you need more information.
|
How to bring files in a filesystem in/out MySQL DB?
|
Quick and dirty way on windows?
Shared folders / Robocopy / Scheduled tasks (or triggered by your app, for that matter)
Nicest way?
Cobian backup with an FTP server
Are you sure you want to develop your own stuff? Is it compulsory to lie within your app?
|
Can anyone point out what technologies would be best suited for an application that backs up data from clients to a server?
The client should choose folders to backup and schedule backups to a server
I would also be interested in how would you start developing/designing, how would you build in the shortest time possible a rudimentary version of the application.
|
Scheduled data backup from client to server
|
I would probably make an effort to avoid writing this code. It sounds like the kind of problem database replication was designed to solve. It would depend on criteria you don't communicate in your question, such as database engine in use, available transports, whether different locations updates would overlap each other, the design of the database as it relates to keys and unique indexes, etc.
|
There are literally thousands of locations where an application (a .NET desktop app) is running, and a requirement is to have the updates to their database (a diff from the last upload) sent to a central server that will store all the locations' data.
What options do I have in coming up with a solution?
One idea is to generate an XML file with all the rows since the last sync and upload that to a file server, from which it will then be imported into the main database.
Note: the changed data will be minimal since this process will run every few hours.
|
1000's of locations have a desktop application that need to upload diff's of their db to a central store
|
Sure sounds like firewall issues. Try stopping iptables, and running again. Also, RALUS can dump a log file - which may give some more to go on.
I use the older UNIX agent myself, which uses port 6101 IIRC - but I believe that the newer client uses tcp/10000 for control and 1024-65535 for transfer.
Last resort is to fire up a network sniffer. ;)
|
I'm trying to do a file system backup of a RedHat Enterprise Linux v4 server using Symantec Backup Exec 11d (Rev 7170). The backup server is Windows Server 2003.
I can browse the target server to create a selection list, and when I do a test run it completes successfully.
However, when I run a real backup, the job fails immediately during the "processing" phase with the error:
e000fe30 - A communications failure has occured.
I've tried opening ports (10000, 1025-9999), etc. But no joy. Any ideas?
|
Symantec Backup Exec 11d RALUS Communications Error [closed]
|
Well, it appears that those views were not really independent, but rather were calling each other in a multi-layered hierarchy.
Even though I was careful to remove only the truly, really useless ones, I broke one or two of the remaining views. Now I'll have to reconstruct the silly pyramid and undo the thing layer by layer.
My bad....sorry for the noise
|
I have a mySQL database that I backup everyday like this:
/usr/bin/mysqldump --all-databases --events > "some_file_name"
This db contained a number of obsolete and useless views that I DROPped 2 days ago,
after which my backup started complaining:
mysqldump: Got error: 1356: View 'xyz' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them when using LOCK TABLES
Now I guess I could recover all the old views from a previous file, only it seems a
silly way to go.
Could somebody suggest how to solve this? I did search the Internet, but it mostly contains issues of "how to reliably save and restore views", which is not my problem.
Thanks!
|
mysql broken after dropping useless views
|
The Following Morning
So, I slept on it and worked it out:
If %CD:~0,2%==E: Goto Doh!
Robocopy \ "E:\2TB Blue" /MIR /R:1 /W:1 /TEE /FFT /A-:SH
Goto End
:Doh!
@Echo The Source and Destination are on the SAME drive!
pause
:End
|
I use this sort of command line in the batch files to backup data from my laptop to various external USB drives and also from one USB drive to another, using Robocopy:
Robocopy \ "D:\2TB Blue" /MIR /R:1 /W:1 /TEE /FFT /A-:SH
But, if I run that batch file on the same drive, e.g. D:, it'll dump a copy of D:\ to D:\2TB Blue, which is no use whatsoever.
Can anyone advise how to not run the batch file if the 'source' and 'destination' are on the same drive?
|
Robocopy backing up with a batch file, but not to the same drive?
|
You are receiving the error:
's3-pit-restore' is not recognized as an internal or external command, operable program or batch file.
This indicates that the Windows command-line cannot find a program called s3-pit-restore.
Therefore, try running:
python3 s3-pit-restore
If you were on Linux instead of Windows, you could chmod +x the script and then it would work fine because the first line of the file is:
#!/usr/bin/env python3
This tells Linux which interpreter to run the script with. It doesn't translate well to Windows.
|
Using project: https://github.com/angeloc/s3-pit-restore
S3-pit-restore seems like a useful point-in-time restore tool for Amazon S3. I followed the provided instructions to install this tool on Windows 10 Pro. Python and AWS CLI are running without any problems. s3-pit-restore files are residing in the same directory as Python, and the path is already set. It is not working for me.
Restoring AWS S3 bucket using s3-pit-restore discussed another issue. I am stuck on installation of this tool
The following steps are taken:
An AWS CLI profile was created and tested. It is working.
pip3 install s3-pit-restore
Customize the provided command according to my needs.
Error message on the command line:
's3-pit-restore' is not recognized as an internal or external command, operable program or batch file.
Has anyone run this code successfully on a Windows 10 environment? Please let me know the exact steps. I feel that the new version of this code will only run in a Linux environment.
Secondly, do you know any tool that can help achieve point-in-time file restoration from AWS S3?
Following Point-in-time restore for Amazon S3 buckets | AWS Storage Blog, I didn't get the desired result. I am still in the process of customizing it.
|
Use s3-pit-restore - not recognized as an internal or external command
|
In general, Cassandra does not provide such functionality out of the box. You are right that replication strategies apply per keyspace, so you would need multiple keyspaces and some external job to read the data and write it to another keyspace. Also, Cassandra is not an effective database for storing old time series on disk, so it is much better to have something like Parquet files as cold storage. Usually this looks like this:
Cold storage:
(Cassandra)--read_data--(Spark Job)--write--(S3 parquet)
Cold storage restore:
(S3 parquet)--read_data--(Spark Job)--write--(Cassandra)
The Cassandra data-deletion job should be developed separately, because it is also not a trivial task. After deleting data you should run nodetool cleanup on each node.
|
I'm thinking about how to store IoT telemetry data.
I'd like to optimize my storage. In this case let us take IoT telemetry as an example. I'd like to keep recent data (e.g. the last 6 months) hot and highly replicated. For older data I'd like to reduce replicas and/or fully offload it to a lower-performance archive cluster.
I know about keyspace-based replication strategies. However, this would mean I'd require multiple keyspaces; I'd rather have replication based on the primary key / shard key instead.
Is it possible to define a replication strategy based on data age or any other property?
If yes, how can this be achieved?
Thanks a lot in advance for your expertise.
|
Cassandra Replication Strategy based on Primary Key for Archiving Data Old Data
|
I resolved the issue by replacing /usr/sbin/sendmail -t with /bin/mailx. For more information (and with due credit to "HBruijn"), see the following link: https://serverfault.com/questions/1147855/amanda-backup-software-using-postfix-generates-invalid-sendmail-option-s
|
I recently updated postfix on my CentOS 6 server to support a new email relay, and everything is working except my Amanda backup software, which is no longer sending status emails. When I run "amreport" on the command line, it gives me these errors:
sendmail: invalid option -- 's'
sendmail: invalid option -- 's'
sendmail: fatal: usage: sendmail [options]
amreport: mail command exited with status 75
However, in my amanda.conf file, I have this
mailer "/usr/sbin/sendmail -t"
Where is Amanda and/or postfix getting the "s" option? Does anyone know how to fix this? Thanks in advance to all who respond.
|
Amanda backup software using postfix generates invalid sendmail option '-s'
|
This is because JFrog Artifactory has a "Cleanup Unused Cached Artifacts" task in the Maintenance section which, based on a cron expression, deletes cached packages.
|
I recently lost a package in my Artifactory remote repository because the package itself disappeared from pypi.org; I didn't know that the Artifactory remote cache works as a proxy cache only. So I need some detailed setup advice on how to avoid this happening again in the future.
I looked into the official docs and a lot of stuff related to backups in the UI, but never found it in my account administration.
|
Remote repository package went missing after package removed from pypi.org
|
The simple solution is to backup the entire JENKINS_HOME folder. In case you need it for disaster recovery, just copy the whole thing back in. There is a number of files inside the JENKINS_HOME folder that are important to backup, such as the jobs folder which holds configuration of all the jobs, as well as a number of files that don't require backing up. If you want to go into detail, the official docs can give you the specifics of what needs backing up.
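A rough nightly sketch of that (the JENKINS_HOME path and host name are assumptions) could be:
tar czf /tmp/jenkins-home-$(date +%F).tar.gz -C /var/lib jenkins
scp /tmp/jenkins-home-$(date +%F).tar.gz backup-host:/backups/jenkins/
Restoring on the standby box is then just extracting the archive into its JENKINS_HOME and restarting Jenkins.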
|
I'm planning to practice Jenkins. I want to set up two Jenkins servers on EC2 instances: one is the production server and the other is a backup. I want the jobs on the production server to be automatically backed up to the backup server, and in case of a disaster I want to restore them to the production server. Can anyone help me with how I can implement this in real time?
I plan to connect them with SSH and run a script on the backup server that pulls the jobs from the production server using build triggers.
|
backup and restore to jenkins in case of disaster
|
No way. GitLab should be updated to version 14.9.
|
I want to create an incremental backup from GitLab version 14.7.1.
It is written in the GitLab documentation that support is only possible in versions 14.9 and later.
How can I do it?
These commands do not work:
sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<timestamp_of_backup>
sudo gitlab-backup create INCREMENTAL=yes BACKUP=<timestamp_of_backup>
|
how to create incremental backup in GitLab Enterprise Edition 14.7.1-ee
|
I found the answer:
AZ command:
az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureBlob --datasource-id
|
I'm working on an automation project where I need to find the backup vaults configured for blob container and file share. I searched Microsoft documents but couldn't find anything to achieve my goal. Could anyone please help?
Thanks.
|
How to find backup vaults for an Azure blob container and file share using AZ CLI
|
You need to use PgAdmin to do the restore when you backup the database with PgAdmin.
Steps to restore the database in pgAdmin:
Create a new database
Right click the database, select restore
Choose the backup file
Data/Objects Tab -> Do not Save -> Owner
Options Tab -> Queries -> Clean before restore
Restore
Make sure you have copied the filestore and renamed the folder to match the restored database name.
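As an illustration only (the filestore location varies by install; this is a common default on Linux):
cp -a ~/.local/share/Odoo/filestore/old_db_name ~/.local/share/Odoo/filestore/new_db_name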
|
How can I backup and restore an Odoo database using pgadmin?
To backup, I right clicked on the database, backup, use .tar format.
When I try to restore, Odoo says: failed restored database.
The database is there, but when I'm logged in, it shows internal error.
|
Internal Error when Restoring Odoo Database from PgAdmin4
|
The error you encountered indicates that the managed identity associated with the policy assignment does not have the required permissions to remediate the resources. To remediate the resources, the managed identity needs to be assigned the minimum RBAC role required for resource remediation.
I followed the steps below to assign the 'Configure backup on virtual machines with a given tag to an existing recovery services vault in the same location' policy to a specific scope
I assigned the Resource Policy Contributor role to user at the subscription level
I assigned the policy to the subscription scope and specified all the required parameters as follows.
I selected Create a remediation task and created a managed identity under the Remediation section
After filling in all the details, click on create to assign the policy.
The policy assignment with a remediation task was created successfully as below.
Reference: Fix and cleanup MSI role assignments for policy assignments at the management group level & Stack link by AjayKumarGhose
Share
Improve this answer
Follow
answered Sep 29, 2023 at 7:40
Venkat VVenkat V
4,09411 gold badge22 silver badges1313 bronze badges
Recognized by Microsoft Azure Collective
1
Thank you for this information. I found out what my issue was. I did not have the correct permission in the subscription I was trying to assign the policy to. I was missing the permission to create a managed identity.
– SRoberts
Sep 29, 2023 at 12:56
Add a comment
|
|
I am using the policy listed below. The roles are set for VM Contributor and Backup Contributor. When performing a remediation, I get the following warning message, "The managed identity for this assignment does not have the appropriate permissions to remediate these resources. To add these permissions, go to the Edit Assignment page for this Policy and re-save it." I have not altered the policy from its builtin policy. I have made sure to set the scope, tag name and value, and recovery service vault & policy.
Policy Name: [Configure backup on virtual machines with a given tag to an existing recovery services vault in the same location]
I have tried remediating the policy, but am given the same error. I have checked the policy definition and assignment to double check everything.
|
Policy is not deploying because managed identity does not have permissions
|
0
I was having a similar issue and I added TF_LOG=DEBUG before my terraform apply:
TF_LOG=DEBUG terraform apply
Looking at the logs, I saw I was missing these IAM permissions:
"backup:CreateBackupVault",
"backup:PutBackupVaultAccessPolicy",
"backup:PutBackupVaultNotifications",
"backup:TagResource",
"backup:DeleteBackupPlan",
"backup:DeleteBackupSelection",
"backup:DescribeBackupVault"
"backup:DeleteBackupVault"
"backup:CreateBackupPlan"
"backup:CreateBackupSelection"
Share
Improve this answer
Follow
edited Sep 19, 2023 at 15:39
answered Sep 19, 2023 at 15:39
NFrancoNFranco
111 bronze badge
1
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
– Community
Bot
Sep 22, 2023 at 9:19
Add a comment
|
|
I'm trying to create an AWS backup plan via Terraform.
For that I have code creating AWS Backup vaults.
During terraform apply I'm getting this error, even though the vaults are created in the console:
Error: reading Backup Vault (AWSTargetBackupVault): couldn't find resource
  with module.build.aws_backup_vault.target_vault[0],
  on .../modules/aws_build/image_table.tf
  in resource "aws_backup_vault" "target_vault"
I tried adding depends_on to the AWS Backup plan, but that didn't do anything.
I'm trying to create cross-region backups, so I'm going to use two backup vaults.
These errors come up while creating the vaults and the backup plan for the first time.
|
Error: reading Backup Vault (AWSTargetBackupVault): couldn't find resource - getting this error while creating backup vaults and a plan via Terraform
|
0
So all I had to do was iterate over the list of users and write each user's fields to the output stream like this:
public void backupDatabase() throws IOException {
    // Create the backup file on external storage if it does not exist yet
    File backupFile = new File(Environment.getExternalStorageDirectory(), AppDatabase.DATABASE_NAME + ".txt");
    if (!backupFile.exists()) {
        try (FileOutputStream outputStream = new FileOutputStream(backupFile)) {
            // opening the stream is enough to create an empty file
        }
    } else {
        if (!backupFile.canWrite()) {
            throw new FileNotFoundException("User does not have permission to write to file " + backupFile.getAbsolutePath());
        }
    }
    // Read the rows through Room instead of copying the raw SQLite file
    AppDatabase database = Room.databaseBuilder(getApplicationContext(), AppDatabase.class, "InnBucks_DB").allowMainThreadQueries().build();
    UserDao userDao = database.userDao();
    List<User> userList = userDao.getallusers();
    // Write one column value per line for every user
    try (FileOutputStream outputStream = new FileOutputStream(backupFile)) {
        for (User user : userList) {
            outputStream.write((user.getId() + "\n").getBytes());
            outputStream.write((user.getUserId() + "\n").getBytes());
            outputStream.write((user.getName() + "\n").getBytes());
        }
    }
    Toast.makeText(BackupRestore.this, "Backup is successful to SD card", Toast.LENGTH_SHORT).show();
}
Share
Improve this answer
Follow
answered Sep 14, 2023 at 9:51
Anesu MazvimaviAnesu Mazvimavi
14711 silver badge1010 bronze badges
Add a comment
|
|
I have a method in my program that backs up a room database in android studio. When I export the file, it comes out as a .txt file. However in my file, the data is all scribbly and not well structured as shown in the image below. I just want the text file to show only data from the room database table columns.
My method to backup the room database
public void backupDatabase() throws IOException {
File backupFile = new File(Environment.getExternalStorageDirectory(), AppDatabase.DATABASE_NAME + ".txt");
if (!backupFile.exists()) {
try (FileOutputStream outputStream = new FileOutputStream(backupFile)) {
}
} else {
if (!backupFile.canWrite()) {
throw new FileNotFoundException("User does not have permission to write to file " + backupFile.getAbsolutePath());
}
}
try (FileInputStream inputStream = new FileInputStream(context.getDatabasePath(AppDatabase.DATABASE_NAME));
FileOutputStream outputStream = new FileOutputStream(backupFile)) {
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) != -1) {
outputStream.write(buffer, 0, bytesRead);
}
}
Toast.makeText(BackupRestore.this, "Backup is successful to SD card", Toast.LENGTH_SHORT).show();
}
The .txt file does show the database in the middle, but the sqlite format stuff is unnecessary
And this is the database content which is what I am trying to show
|
Roomdatabase backup file showing roommaster table scribbly text
|
0
You seem confused about pg_dump and how it operates. I suggest reading the online manuals.
https://www.postgresql.org/docs/current/app-pgdump.html
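Note in particular that pg_dump is a client tool: it does not have to run on the database server itself, a connection is enough. A minimal sketch with placeholder host, user and database names:
# Dump a remote database over the network into a single custom-format file
pg_dump -h db.example.com -p 5432 -U myuser -d mydb -Fc -f mydb.dump
# Restore it later (into an existing empty database) with pg_restore
pg_restore -h db.example.com -p 5432 -U myuser -d mydb_restored mydb.dump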
Share
Improve this answer
Follow
answered Sep 11, 2023 at 11:07
Richard HuxtonRichard Huxton
22k33 gold badges4141 silver badges5252 bronze badges
Add a comment
|
|
Apart from pg_dump, what else can I do to back up a PostgreSQL database? I thought pg_dump had to run on the PostgreSQL server itself, but I only know the database connection info.
I tried to use SQL to do this, but there are too many things to back up (tables, sequences, functions and so on), so I want to know whether there is anything else that can help me back up PostgreSQL.
|
Apart from pg_dump, what else can I do to back up a PostgreSQL database?
|
0
.bacpac files are created for the entire database, and MS SQL Server does not allow you to selectively import data or schema from a .bacpac file into an existing database.
When you import, a new database is always created; if a database with the same name already exists, the import fails with a "database already exists with the same name" error.
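For reference, a .bacpac import always targets a database name that does not exist yet; a hedged sketch with the SqlPackage CLI (server, database, file name and credentials are placeholders):
SqlPackage /Action:Import \
  /SourceFile:"mydb.bacpac" \
  /TargetServerName:"myserver.database.windows.net" \
  /TargetDatabaseName:"mydb_new" \
  /TargetUser:"sqladmin" /TargetPassword:"<password>"
If you need the data inside an existing database, import into a new one like this and then move the data across (for example with INSERT ... SELECT or a data copy tool).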
Share
Improve this answer
Follow
answered Aug 22, 2023 at 10:28
harshityadav95harshityadav95
5377 bronze badges
Add a comment
|
|
Can we import a .bacpac file without creating a new DB in Azure?
I tried importing it into the existing DB.
It didn't work; a new DB was created instead.
I want the bacpac imported into the existing DB.
|
Can we import SQL bacpac file in existing DB (azure)
|
0
Turns out an inactive replication slot was responsible for the space growth... It was not locking anything, just sitting there, but it was causing WAL/log space to grow exponentially each day.
Dropping the (logical) replication slot was the solution to reclaiming space. Within 10-15 minutes, the high watermark reset to normal.
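To check for the same problem on your own server, the pg_replication_slots view shows which slots are inactive and still pinning WAL; dropping one is a single function call (the slot name below is a placeholder, and dropping is irreversible):
# List slots and spot inactive ones that still hold a restart_lsn
psql -c "SELECT slot_name, slot_type, active, restart_lsn FROM pg_replication_slots;"
# Drop the stale slot so the server can recycle the retained WAL
psql -c "SELECT pg_drop_replication_slot('my_stale_slot');"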
Share
Improve this answer
Follow
answered Aug 17, 2023 at 16:32
Alex VAlex V
31522 silver badges77 bronze badges
Add a comment
|
|
We have an Azure database for Postgres Flexible Server option. Server has been allocated to 512GB, and total size of all databases on the server is about 110G. We had online backups set up for the maximum allowed retention period of 35 days. This had started on July 17th, 2023. Since then, space used on the server has been growing at a fairly steady rate and monitoring showed it approaching 95% used. Instead of upscaling the Azure server to the next size (1TB), we reduced the online backup retention to 8 days (we are also doing offline daily backups, so point in time recovery will still be possible for 8 days, but a catastrophic failure/research recovery would still be available for much longer periods of time).
However, space used on the server continues to creep up. Granted, 35 days has not yet passed and old backups have not begin to fall off the tail end, but with an 8-day retention, shouldn't the backup window just roll on itself and space should no longer go up?
Our databases do not grow at 1% per day, yet storage seems to do.
Questions:
at which point would storage stop growing, given that retention is now only 8 days instead of 35?
Is there a way to completely drop the old (8 to 35 days) backups, so that they no longer consume storage? On the backup/restore page it only shows the last 8 backups (as expected), with no ability to see older ones.
Thank you in advance!
|
Reclaim storage from Azure database for Postgres
|
0
I need to filter using date time for the job report. In my environment jobs are scheduled like yesterday 10pm to today 10pm.
To pull the Azure backup job list using the Azure CLI az backup job list command for Azure Backup Server (MABS) and filter the job report based on a specific date and time range, you can use the command below.
az backup job list --resource-group "RG-Name" --vault-name "Vault-Name" --end-date 17-07-2023-13:28:47 --start-date 16-07-2023-10:28:47
Output:
If you want to run the script from automation, kindly authenticate using a service principal and assign the required role to the service principal
$tenantId = "650cxxxxxxx-a944-627017451367"
$appId = "e47444ef-xxxxxx-a7f0-83dbab1feb64"
$appSecret = "yhn8Q~xxxxxxxxxxHSEfiQNRH3ypHybNN"
$secureAppSecret = ConvertTo-SecureString -String $appSecret -AsPlainText -Force
$cred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $appId, $secureAppSecret
Connect-AzAccount -ServicePrincipal -Credential $cred -TenantId $tenantId
$Recoverydetails = az backup job list --resource-group "RG-name" --vault-name "vaultname" --end-date "17-07-2023-13:28:47" --start-date "16-07-2023-10:28:47" | ConvertFrom-Json
$Recoverydetails | Export-Csv -Path "/home/user/recovertreport.csv" -NoTypeInformation
The above script will authenticate using a service principal and export the Azure backup report to a CSV file.
Note: Modify the command as per your requirement
Add the same script to the Task Scheduler to run the script every day.
Reference: List all backup jobs of a Recovery Services vault.
Share
Improve this answer
Follow
edited Jul 18, 2023 at 8:32
answered Jul 17, 2023 at 9:24
Venkat VVenkat V
4,09411 gold badge22 silver badges1313 bronze badges
Recognized by Microsoft Azure Collective
Add a comment
|
|
I am trying to pull an Azure backup report using a Recovery Services vault. I am using az backup job list, but I'm not able to pull the backup job list for MABS (Azure Backup Server).
Also, I need to filter by date and time for the job report. In my environment, jobs are scheduled from yesterday 10pm to today 10pm.
Please help me with the command
|
Azure backup report using powershell
|
Thanks for your guess.
You're right! The URL was wrong when used in Postman.
When I sent the request again with the correct URL, I got the message below:
"code": "ServiceLocked",
"message": "The API Service assist-prod-us-ni25-gateway is transitioning at this time. Please try the request again later."
After some time, I got the backup file in the given storage account container.
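For anyone comparing their request, a hedged sketch of the backup call is below; the api-version and the exact body fields are assumptions to double-check against the current Azure REST docs, and all names are placeholders:
# POST the backup request to the ARM endpoint (all values below are placeholders/assumptions)
curl -X POST \
  "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/backup?api-version=2022-08-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"storageAccount": "<storage-account>", "accessKey": "<storage-key>", "containerName": "<container>", "backupName": "<backup-blob-name>"}'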
|
I'm using the Azure Management REST API for backing up the APIM instance data but getting the error below:
"message": "No HTTP resource was found that matches the request URI"
I passed the Authorization token and the backup parameters in the body correctly, but I'm not sure what is causing this error.
The APIM instance has plenty of APIs with GET, POST, PUT and DELETE operations.
|
"message": "No HTTP resource was found that matches the request URI while sending the POST request for backup in APIM
|
0
The feature that Microsoft provides is a "Soft Delete" option; it is not a "Backup" feature of the workspace.
After a delete, you can recover the workspace for up to 14 days after the delete action. After that the data gets purged (as also described in the article you shared).
To back up your data, there are a few options to duplicate it to other destinations:
When configuring the Diagnostic Settings of the different sources, you can specify multiple destinations: either a secondary Log Analytics Workspace, a Storage Account, or streaming to other places using Event Hubs.
You can configure a data export rule on the Log Analytics Workspace to export the selected tables to a Storage Account or stream them to other places using Event Hubs (see the sketch after this list).
Export data from a Log Analytics workspace to a storage account by using Logic Apps
Schedule export of data based on a log query you define with the Log Analytics query API. Use Azure Data Factory, Azure Functions, or Azure Logic Apps to orchestrate queries in your workspace and export data to a destination.
One-time export to a local machine by using a PowerShell script.
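As an illustration of the data export rule mentioned above, a continuous export to a storage account can be created from the CLI; a minimal sketch with placeholder names (the table list and resource IDs are assumptions):
az monitor log-analytics workspace data-export create \
  --resource-group myRG \
  --workspace-name myWorkspace \
  --name export-to-storage \
  --tables SecurityEvent Heartbeat \
  --destination "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mybackupsa"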
Share
Improve this answer
Follow
answered Jun 20, 2023 at 6:30
PhilipPhilip
64711 silver badge1111 bronze badges
Add a comment
|
|
I'm just wondering how Azure deals with backing up Log Analytics workspaces.
I want to design a log audit to collect logs from different resources and keep them in the LA workspaces. However, I want to be sure whether there is a recovery plan by default in Azure, or whether there is a way I can design a backup in Azure to back up the LA workspaces.
Microsoft Azure promises to recover deleted workspaces within 14 days, but this feature does not fit my scenario. I need a longer period that I can be confident in to recover the backup data, for example two years or more.
Any idea?
|
How to backup Log Analytics workspace or recover?
|
0
I am trying to get the backup schedule and retention for SQL Servers using Azure Resource Graph Explorer. I am unable to find any such information.
The backup schedule and retention information for SQL Servers and databases is not stored in the resources table. However, you can use Azure Resource Graph Explorer to query information on backup for your Azure resources.
The RecoveryServicesResources table contains most of the backup-related records, such as job details and backup instance details. You can use the following Kusto query to list all backup policies used for Azure SQL Servers:
RecoveryServicesResources | where type == 'microsoft.recoveryservices/vaults/backuppolicies'
This query will return all backup policies. However, it will not provide you with the backup schedule and retention information for an Azure SQL database; you can use the PowerShell cmdlets below to get this information.
Get-AzSqlDatabaseBackupShortTermRetentionPolicy
Get-AzSqlDatabaseBackupLongTermRetentionPolicy
Retrieving the backup schedule and retention information directly from Azure Resource Graph Explorer is not currently supported.
Share
Improve this answer
Follow
answered Jun 19, 2023 at 10:41
Venkat VVenkat V
4,09411 gold badge22 silver badges1313 bronze badges
Recognized by Microsoft Azure Collective
Add a comment
|
|
I am trying to get the backup schedule and retention for SQL Servers using Azure Resource Graph Explorer. I am unable to find any such information.
The current query that I have is
resources
| where type == 'microsoft.sql/servers'
I am trying to get the short term and long term backup information for SQL Servers and databases but it doesn't seem to exist in the resources table. I am not sure where this information is stored. I couldn't find anything in recoveryservicesresources table related to PaaS instance of SQL Server backups.
Any ideas?
PS: I am trying to do this using KQL. I know it is possible using PowerShell using 'Get-AzSqlDatabaseBackupShortTermRetentionPolicy' and 'Get-AzSqlDatabaseBackupLongTermRetentionPolicy'.
|
Get backup schedule and retention (short and long term) for SQL Servers using Azure Resource Graph Explorer
|
0
It turned out that these failures were caused by the Acronis backup tool.
I'm not sure how it works, but it seems that Acronis uses shadow copies as well, and that blocks the possibility of creating differential backups.
That's why it only lets you perform one backup but not schedule new ones on top of it.
There are two solutions: use only one of these tools to perform backups, or, if you want to use both for some reason, write a script to move and delete old backups so that a new full backup is generated every time. That takes much longer to complete than differential backups, but it works.
Share
Improve this answer
Follow
answered Jul 27, 2023 at 10:38
JerryHCJerryHC
122 bronze badges
Add a comment
|
|
I am using Windows Backup and creating a system state backup. First time it works, but during next backups I have the following error.
Error in deletion of [C:\System Volume Information\MasterFileStatus.db] while pruning the target VHD. Error [0x80070020] The process cannot access the file because it is being used by another process.
Has anybody faced this error and knows how to resolve it?
I've been trying things like disabling antivirus, recreating volume, different shadow copy backup options, but now I'm running out of ideas.
|
Windows Backup error 0x80070020 MasterFileStatus.db
|
Your code will only handle directory copying to a new location. See the documentation of Files.copy:
throws DirectoryNotEmptyException - the REPLACE_EXISTING option is
specified but the file cannot be replaced because it is a non-empty
directory
Therefore it is unnecessary to copy a directory that already exists; just guard the copy call by checking !Files.isDirectory(destination) beforehand.
if (!Files.isDirectory(destination)) {
System.out.format("Files.copy(%s, %s)%n",source, destination);
Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
}
It is also unnecessary to test for srcFileOrDir.isFile() as Files.walk works for file or directory. Put that all together with exception handling, no path splitting, and clean new Path only method:
private static void copyToDirectory(Path src, Path destDir) throws IOException {
final Path dest = destDir.resolve(src.getFileName());
System.out.format("copying %s => %s%n",src, dest);
Files.walk(src).forEach(source -> {
Path destination = dest.resolve(src.relativize(source));
try {
if (!Files.isDirectory(destination)) {
System.out.format("Files.copy(%s, %s)%n",source, destination);
Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
});
}
|
I wrote a program, where you can backup file or directory to chosen folder. Here is piece of my code:
try {
if (srcFileOrDir.isFile()) {
File destFile = new File(destDir.getAbsolutePath() + File.separator + srcFileOrDir.getName());
Files.copy(srcFileOrDir.toPath(), destFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
}
else { //isDirectory
String[] pathParts = srcFileOrDir.getAbsolutePath().split("\\\\");
destDir = new File(destDir.getAbsolutePath() + File.separator + pathParts[pathParts.length-1]);
Files.walk(srcFileOrDir.toPath()).forEach(source -> {
Path destination = destDir.toPath().resolve(srcFileOrDir.toPath().relativize(source));
try {
Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
} catch (IOException e) {
e.printStackTrace();
}
});
}
JOptionPane.showMessageDialog(null, "Backup was created successfully at " + destDir.getAbsolutePath(), "Success!", JOptionPane.INFORMATION_MESSAGE);
}
catch (IOException e) {
e.printStackTrace();
}
The problem is that the program throws DirectoryNotEmptyException when I try to execute Files.copy(). The exception is thrown only if I am trying to back up a directory; backing up a single file works fine.
I used the REPLACE_EXISTING option, but my program seems to ignore it. Thanks in advance.
|
DirectoryNotEmptyException while copying files
|
0
Run kubectl auth can-i get pods --subresource=log. If the response is "no" then you don't have permission to read container logs. You can do a similar thing with the Velero resources (CRDs), e.g. kubectl auth can-i get <CRD>. Ask your cluster administrator to grant you the appropriate access.
Share
Improve this answer
Follow
answered May 12, 2023 at 17:32
Jeremy CowanJeremy Cowan
71155 silver badges1313 bronze badges
3
Hi, Jeremy Cowan thank you for replying. I am able to access pod logs. I am not able to access velero backup logs.
– TechGirl
May 15, 2023 at 14:53
Do you see errors in the Velero pod logs? Velero stores its backup logs in S3 so if you can't retrieve them from the Velero CLI, you should be able to get them directly from S3.
– Jeremy Cowan
May 15, 2023 at 21:33
I can view the log files stored in S3 but unable to view resources backup by velero using velero CLI.
– TechGirl
May 17, 2023 at 9:36
Add a comment
|
|
I am unable to view backup logs. I get "Access denied" when I run this command "velero backup logs ". I am also unable to view the backup resource list. Let me know what needs to be done here. I am using Velero-plugin-for-aws.
I really appreciate any help that can be provided.
|
Access denied when accessing velero backup logs [closed]
|
0
I tried your scenario in my environment, and it worked successfully.
Here, LTR backups when server exists:
Here, LTR backups when server is deleted:
Since you are not getting any error and the command simply does not show the LTR backups, this usually happens when the LTR backup has already been deleted after its retention period; in that case you get no error and no output after the command runs. Please check your retention policy.
Share
Improve this answer
Follow
answered May 11, 2023 at 5:43
Pratik LadPratik Lad
6,30522 gold badges44 silver badges1313 bronze badges
Recognized by Microsoft Azure Collective
1
What sku did you use? And what was the LTR settings? I tried 4 Weeks as we do not need to save for a long time. Just have the extra security if the server happens to get deleted. I also run the Get-AzSqlDatabaseLongTermRetentionBackup more or less immediately after the server was removed, so I should have been able to see some backups...
– Tommy Selggren
May 12, 2023 at 3:52
Add a comment
|
|
I have set up LTR for an Azure SQL Database and running the Get-AzSqlDatabaseLongTermRetentionBackup -Location northeurope I can see that I have one backup.
After I have deleted the SQL Server and running Get-AzSqlDatabaseLongTermRetentionBackup -Location northeurope again, I can't see the LTR backup. According to https://www.mssqltips.com/sqlservertip/6443/how-to-restore-azure-sql-ltr-backup-after-azure-sql-instance-deleted/ this should be possible.
So what am I doing wrong?
The SKU is Basic and the DTU is 5...
|
Cannot find LTR backups for Azure SQL Database after the server is deleted
|
0
I was getting the same error, but I was able to make backups to Google Drive using masbug/flysystem-google-drive-ext.
The usage and configuration are almost the same as nao-pon/flysystem-google-drive.
There is one quirk with this package though: you have to set the backup name to "" in the spatie backup config to make it work.
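For reference, swapping the packages is just a composer change (run in the project root; version constraints are left to composer here):
composer remove nao-pon/flysystem-google-drive
composer require masbug/flysystem-google-drive-ext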
Share
Improve this answer
Follow
answered Jun 5, 2023 at 3:59
Luis AguileraLuis Aguilera
122 bronze badges
Add a comment
|
|
I'm trying to set up a backup for my Laravel project using the Spatie Laravel Backup package, but I can't install the Flysystem adapter for Google Drive. I get the error:
`- Root composer.json requires nao-pon/flysystem-google-drive ~1.1 -> satisfiable by nao-pon/flysystem-google-drive[1.1.0, ..., 1.1.13].
- nao-pon/flysystem-google-drive[1.1.0, ..., 1.1.13] require league/flysystem ~1.0 -> found league/flysystem[1.0.0, ..., 1.1.10] but the package is fixed to 3.14.0 (lock file version) by a partial update and that version does not match. Make sure you list it as an argument for the update command.
Use the option --with-all-dependencies (-W) to allow upgrades, downgrades and removals for packages currently locked to specific versions.`
|
I cant configure google drive backup onmy Laravel Project
|
0
There is a tool that uses rsync and organizes the backup folder into a catalog for easy future use.
The tool is Butterfly Backup. I found this article (and also its documentation) that explains how it works: https://fedoramagazine.org/butterfly-backup/
Docs: https://github.com/MatteoGuadrini/Butterfly-Backup
Simple usage:
bb backup --computer pc1 --destination /nas/mybackup --data User Config --type MacOS --mode Full
And catalog is:
bb list --catalog /nas/mybackup
...
BUTTERFLY BACKUP CATALOG
Backup id: f65e5afe-9734-11e8-b0bb-005056a664e0
Hostname or ip: pc1
Timestamp: 2018-08-03 17:50:36
Backup id: 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Hostname or ip: pc1
Timestamp: 2018-08-06 07:26:46
Backup id: cc6e2744-9944-11e8-b82a-005056a664e0
Hostname or ip: pc1
Timestamp: 2018-08-06 08:49:00
Share
Improve this answer
Follow
answered Aug 4, 2023 at 9:25
teroeasyfiyteroeasyfiy
1
Add a comment
|
|
I am trying to implement a custom backup function using rsync. For this, I modified the existing code from https://linuxconfig.org/how-to-create-incremental-backups-using-rsync-on-linux as follows:
#!/bin/zsh
#https://linuxconfig.org/how-to-create-incremental-backups-using-rsync-on-linux
# A script to perform incremental backups using rsync
set -o errexit
set -o nounset
set -o pipefail
incremental_bckp_rsync_dir(){
readonly SOURCE_DIR="/$1"
readonly BACKUP_DIR="/$2/$1"
readonly DATETIME="$(date '+%Y-%m-%d_%H:%M:%S')"
readonly BACKUP_PATH="${BACKUP_DIR}/${DATETIME}"
readonly LATEST_LINK="${BACKUP_DIR}/latest"
mkdir -p "${BACKUP_DIR}"
rsync -av --delete \
"${SOURCE_DIR}/" \
--link-dest "${LATEST_LINK}" \
--exclude=".cache" \
"${BACKUP_PATH}"
rm -rf "${LATEST_LINK}"
ln -s "${BACKUP_PATH}" "${LATEST_LINK}"
}
incremental_bckp_rsync_dir path/to/dir path/to/backup
It successfully backs up the dir; however, the size of the backup dir (obtained with the command du -h path/to/backup) seems to double every time I run the script (which means it is not incremental, from what I understand). Is there a way to fix it?
|
How to write an incremental backup script using rsync
|
0
No. There is no way to do this from an EC2 or VM snapshot. The daily backup is only the full backup. There's already a differential/incremental backup taken every 2 hours (it used to be every hour, but this was too frequent for larger clusters to complete in time before the next increment).
The only scenario you can change the existing IP is a restore from backup scenario (yes, the built-in backup). And, this would involve running the process here: https://www.dynatrace.com/support/help/shortlink/managed-cluster-restore#restore-from-backup
Note that if you were to use a snapshot to restore the VM, you would first have to uninstall Dynatrace from that replaced VM anyway before running the restore. This implies deleting all data as part of the uninstall. Therefore, your attempts to use snapshotting for this would be fruitless.
This process also takes many hours on a multi-node cluster setup with lots of existing data. You must first stop any running nodes in the cluster, because when changing the IP of even just one node you need to inform all other nodes of the new locations (IP addresses) through the restore mode. It's also why it's only really recommended to use it during a full cluster loss (such as data centre loss).
If you are running a multi-node cluster of at least 5 nodes, and your concern for your backup & restore to a new VM with a new IP is really only for a situation where you lose 1 VM, you are probably better off simply adding a new node through replication and manually cleaning up/removing the old node. Better yet, set up rack-awareness for the cluster to make it more resilient with 9 nodes or more.
Share
Improve this answer
Follow
answered May 3, 2023 at 12:05
The_AMThe_AM
1111 bronze badge
Add a comment
|
|
I've been looking to see if there's a way to backup my dynatrace managed install by taking snapshots of the EC2's volumes.
However, it seems that when I try to bring up the cluster from the snapshots, it goes haywire since all the private IP addresses have changed, and the cluster and servers can't function. I've tried manually hacking through the config files and changing all the old IP addresses to the new ones. After enough hacking I got it to a point where the console would at least let me log in, but when looking at the cluster through the console it still had the old IP addresses, and showed errors with all nodes in the cluster. I think this guy did something similar: https://community.dynatrace.com/t5/Dynatrace-Open-Q-A/How-can-we-change-IP-address-of-Dynatrace-Managed-Cluster-server/m-p/85423/highlight/true#M5176
The main reason I am trying to create my own system for making backups with snapshots is because the default backups only let you do daily backups, and I would like to see if I could do backups more often.
Is there a way to update Dynatrace Managed's Cluster IP Addresses via ssh? I'm looking for a way by ssh into each cluster node, fixing the IP addresses, then restarting the dynatrace node.
|
How to update cluster's IP addresses in Dynatrace Managed?
|
0
Your backups are online, not on your phone.
See: https://support.google.com/drive/answer/6305834?hl=en&co=GENIE.Platform%3DDesktop
If the PIN you refer to was to unlock your device, this is not what you need. You need to log into your Google account (<probably your name>@gmail.com) and use the password for that account.
Share
Improve this answer
Follow
answered Mar 29, 2023 at 16:09
EselfarEselfar
3,82933 gold badges2424 silver badges4444 bronze badges
1
I am referring to the pin. I need the pin to restore my back up
– Anurag Thapa
Mar 29, 2023 at 22:25
Add a comment
|
|
I recently had to format my device. Is there any way to get the last password from my device, as I need to restore the data from the Google backup? It was a 4-digit (device) PIN. Every time I try to cd it shows an error: /system/bin/sh: cd /system: inaccessible or not found. I have the device with me and unlocked, but there is no way to recover the contacts and other things I had.
|
Is there any way to recover my old password from android device using adb?
|
0
Note that a collection and a database are not the same thing. The correct command is:
mongodump --uri mongodb+srv://<your_server_url>/<db_name> --collection <collection_name> -o . -v
Share
Improve this answer
Follow
answered May 30, 2023 at 7:22
HelgueraHelguera
8511 silver badge88 bronze badges
Add a comment
|
|
I am trying to make a dump of my MongoDB Atlas database using the command given under the database cmd line tools, under binary import and export tools. Here's a picture of where the command was taken from in Atlas:
I'm modifying the actual command a bit, here it is: mongodump --uri mongodb+srv://[email protected]/ -vvvv
The command seems to run and I get no error messages but it does not actually provide me with a dump of the database. I get the following message upon running the command in windows cmd:
It doesn't provide me with any information as to what could be going wrong. I tried disconnecting my application from the database before running the command, but it didn't help.
|
Cannot dump MongoDB Atlas database. Get "dumping up to 0 collections in parallel"
|
0
You also need to back up the Packages folder, besides Assets and ProjectSettings.
The Packages folder contains the manifest.json file that describes all packages you use in your project. If you don't have it, packages will be missing in your project.
Share
Improve this answer
Follow
answered Mar 17, 2023 at 7:57
frankhermesfrankhermes
4,77011 gold badge2323 silver badges4040 bronze badges
Add a comment
|
|
This question already has answers here:
Cleaning up and Backup / Migrating existing Unity project into new one or another PC
(2 answers)
Closed 12 months ago.
What is the best way to back up a Vuforia project with the smallest size?
Thanks
I tried backing up the Assets and ProjectSettings folders, but Vuforia is not recognized when I open the backed-up project.
|
How to backup vuforia unty project smallest size? [duplicate]
|
0
Somewhat solved it: I just added a script that runs chown postgres /var/backups into the /docker-entrypoint-initdb.d folder of the Docker postgres container. Every script you put in that folder is run when the container's database is first initialized, so I just mounted a folder with the script from my host machine into that one.
It works, but I don't know if it's best practice.
Share
Improve this answer
Follow
answered Mar 12, 2023 at 22:14
Vincent AdamsVincent Adams
5188 bronze badges
Add a comment
|
|
I am using a postgres image and I would like to implement incremental backups. I have mounted a backup directory from my host machine to a backup dir in the container where postgres would supossedly archive the WAL files.
The archive_command in my config looks like this:
archive_command ='DIR="/var/backups/$(date +%Y%m%d)-wal"; (test -d "$DIR" || mkdir -p "$DIR") && gzip < "%p" > "$DIR/%f.gz"'
I've realized that the postgres user does not have the necessary permissions to write into /var/backups, and I believe every directory where it has permission to write is being used for something.
Any ideas on what I can do? Of course running a command like chown postgres /var/backups would simply rewrite the command of the original image, and I would like to use the base postgres image if possible. Isn't there any directory where postgres user can write to for these specific cases?
|
Having trouble implementing incremental backups (wal archiving) in a postgres container
|
0
As I mentioned in the question, the issue is on the server side.
There is an option in Samba, "hide dot files", and it is enabled by default.
hide dot files (S)
This is a boolean parameter that controls whether files starting
with a dot appear as hidden files.
Default:
hide dot files = yes
I turned off this option in the Samba configuration file, restarted the Samba service, and now the script works without issues.
If you are using OpenMediaVault like me, you can deselect the option in the web interface by editing the shared folder.
(screenshot)
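If you are on a plain Samba install rather than OpenMediaVault, the same change is one line in the share section of smb.conf followed by a service restart (the share section and service name are assumptions, e.g. smbd vs smb depending on the distro):
# In the relevant [share] section of /etc/samba/smb.conf add:
#   hide dot files = no
# then reload/restart Samba:
sudo systemctl restart smbd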
Share
Improve this answer
Follow
answered Jul 15, 2023 at 8:08
TheFaxTheFax
122 bronze badges
Add a comment
|
|
I'm starting to use robocopy for my daily backups from my computer to my NAS (Openmediavault, with a Samba sharing).
The target is create a mirror of my Z:\ disk into the NAS.
To make it possible I use this line into a batch file:
robocopy "Z:\ " ^
"\\OMV\Backup\ " ^
/XD "?RECYCLE.BIN" "System Volume Information" "\\OMV\Backup\.recycle" ^
/XF "Thumbs.db" ^
/MIR ^
/IT ^
/COPY:DAT /DCOPY:DAT ^
/R:0 ^
/NDL ^
/FFT
It seems to work well, but every time I start the script, robocopy copies every local file starting with "." to the NAS, even if it has not been modified. There are hundreds of these files on my disk, so this becomes annoying.
(extract from the log:
Variato 3515 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.bashrc
Variato 4240 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.bash_history
Variato 220 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.bash_logout
Variato 675 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.profile
Variato 66 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.selected_editor
Variato 0 Z:\Progetti\(2022-05-16) - Music Selector\sourcecode\home\thfx\.Xauthority
Is there a trick to avoid this and copy the files only if they have actually changed?
(I fully understand that the problem is on the server side, where the Samba share doesn't handle file attributes in the same way Windows does, but it seems strange to me that there is no info on the internet about this behaviour.)
|
Robocopy and files starting with "." dot
|
0
The best way - and I dare say the industry standard - is to have an Ansible playbook that describes your setup. Then you can run your playbook for a different machine. I have an Ansible playbook for every machine, including my working notebook. If something is important, it goes into a playbook.
Too enterprisey for you? Just want the damned one-liner?
copy the content of /etc/yum.repos.d/*
save the output of rpm -qa on machine A
on machine B run: dnf install $(cat list.from.machine.A.txt)
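A slightly more robust variant of that one-liner strips the version numbers, so the list still installs cleanly if the two machines are on slightly different minor releases (the file name is arbitrary):
# On machine A: package names only, no versions
rpm -qa --qf '%{NAME}\n' | sort > list.from.machine.A.txt
# On machine B: install the same set
sudo dnf install $(cat list.from.machine.A.txt)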
Share
Improve this answer
Follow
answered Feb 24, 2023 at 9:58
msuchymsuchy
5,25211 gold badge1414 silver badges2626 bronze badges
Add a comment
|
|
Good morning,
Quite often, I need to create a backup server from an original server (Rocky 9). Thanks to Webmin I have the exact configuration, but the original server got specific RPM packages installed while building and tuning it. To go quickly, if I had a list of all the installed RPMs, or a way to install a backup server with the same OS and the same RPMs (without missing one), that would be quite efficient. Then I apply my config and it should work straight out of the box, which is not always the case; sometimes I spend hours and hours because one package hasn't been installed.
Is there an easy way to do this (to back up the OS installation config)? An RPM command which can generate a list of RPM packages, or an XML file which can be used for re-installation?
Or should I find a script which lists all installed packages and another script to install that list of RPM packages?
Thanks and regards,
looking for a way to gather RPM packages from an installation
|
Duplicating server : HOWTO get and install the exact / same RPM packages?
|
0
Elastic Load Balancers do not support failover routing per se.
You can instead use Route53, if it fits with your use case, with a failover routing policy
Share
Improve this answer
Follow
edited Feb 21, 2023 at 17:19
answered Feb 21, 2023 at 15:57
Filippo TestiniFilippo Testini
2,13011 gold badge33 silver badges1818 bronze badges
2
This is almost suitable, however my different endpoints have different paths. So any solution would need to be able to route the failover to a completely different URL, not just a new IP address resolution.
– CathalMF
Feb 23, 2023 at 10:39
A complete, and external, new route is not suitable but you can easily achieve it with a proxy mapping using the API Gateway.
– Filippo Testini
Feb 23, 2023 at 12:15
Add a comment
|
|
I currently have an AWS-hosted service which makes HTTP requests to an external service provider. I want to add some backup in case the external service provider goes down (which has happened).
Does AWS have a load balancer suitable for balancing outbound connections which also balances based on some health checks.
My idea is that if my primary provider goes down or fails some other health checks we will fail over to some backup endpoints which are from other providers.
|
AWS outbound load balancing with health checks to multiple service providers
|
0
I can only confirm that this is currently not possible.
Most cloud providers don't offer such an option for SaaS or even PaaS databases. In my opinion, the only solution is to use services such as Data Factory or Databricks and dump your data to storage.
Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
The key question is whether such a backup will be valuable to you, in terms of process performance and data consistency.
Share
Improve this answer
Follow
answered Feb 21, 2023 at 7:17
jbgorskijbgorski
1,86499 silver badges1616 bronze badges
Add a comment
|
|
As a best practice, after an environment is deallocated we delete all the resources from the cloud, but we need the data stored in the cloud, and we are not able to find any solution to take an Azure Cosmos DB backup locally.
|
How to take Azure Cosmos db local backup
|
0
Error SQL72014
To resolve this error, you need to enable contained database authentication on the SQL Server instance, as mentioned by @ShaktiSingh-MSFT.
By following the documentation referenced by @Windhoek you will find the same issue raised and resolved.
sp_configure 'contained database authentication', 1;
GO
RECONFIGURE;
GO
Here, the sp_configure 'contained database authentication', 1 command is used to enable contained database authentication in SQL Server.
Refer to this MS document for more information.
You can also refer to this SO thread.
Share
Improve this answer
Follow
edited Mar 6, 2023 at 15:23
General Grievance
4,7153434 gold badges3434 silver badges4848 bronze badges
answered Feb 21, 2023 at 9:12
vijayavijaya
1,64311 gold badge33 silver badges66 bronze badges
Add a comment
|
|
When trying to restore a database backup from Azure SQL on a local SSMS (v19.x), it can't proceed and shows errors:
Could not import package.
Warning SQL72012: The object [Identity_2023-02-14_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [Identity_2023-02-14_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Error SQL72014: Framework Microsoft SqlClient Data Provider: Msg 35221, Level 16, State 1, Line 1 Could not process the operation.
Always On Availability Groups replica manager is disabled on this instance of SQL Server. Enable Always On Availability Groups, by using the SQL Server Configuration Manager. Then, restart the SQL Server service, and retry the currently operation. For information about how to enable and disable Always On Availability Groups, see SQL Server Books Online.
Error SQL72045: Script execution error. The executed script:
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [XTP_704A0C41], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_XTP_704A0C41.mdf') TO FILEGROUP [XTP];
|
When trying to restore a database backup from Azure SQL on a local SSMS, it doesn't do it and shows errors
|
0
It is easy
Backup
To create a backup from the origin bitbucket repository:
Visit the Bitbucket site and go to the repository
Click the “clone“ button and copy the link
Before starting the clone, add the --bare flag, for example: git clone --bare [email protected]:<account-name>/<repo-name>.git
The cloned directory will get a .git suffix, like directory-name.git
This directory can be stored locally.
Restore
To restore the local repository to a new Bitbucket origin:
Visit the Bitbucket site and create a new repository
Get the origin link of the new repository [email protected]:<account-name>/<new-repository-name>.git
Add the remote origin path: git remote add origin [email protected]:<account-name>/<bitbucket-repo-name>.git. If needed, first check or remove an old remote with git remote -v and git remote rm origin
Push the repository to the new origin: git push origin --mirror
In this case all data, including commits, branches and tags, will be transferred from local storage to the Bitbucket repository.
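An alternative sketch using a mirror clone (which also keeps all remote refs up to date on re-fetch); account, repository names and the backup path are placeholders:
# Backup: a bare mirror clone contains every branch, tag and ref
git clone --mirror [email protected]:<account-name>/<repo-name>.git /backups/<repo-name>.git
# Refresh the backup later without re-cloning
git --git-dir=/backups/<repo-name>.git remote update --prune
# Restore: push everything into a freshly created empty repository
cd /backups/<repo-name>.git
git push --mirror [email protected]:<account-name>/<new-repo-name>.git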
Share
Improve this answer
Follow
answered May 23, 2023 at 12:23
Igor-PotapovIgor-Potapov
7722 silver badges1111 bronze badges
Add a comment
|
|
I must say that I am fairly new to git, which might show in my description of the problem. I have researched ways to make a full backup of a repository, including all branches, tags, everything! I would like to have it as a restore point in case my Bitbucket account is compromised or my Bitbucket repositories, for some reason, disappear.
I have tried different things. But I sense I am not fully understanding what is happening and therefore not able to determine which approach is the correct one and why.
There are a lot of questions covering this subject, so I might have missed the obvious answer :-). However, I am not able to fully understand whether any of them consider all branches or just the one already checked out, or whether they back up the remote cloud-based repo rather than a local version of it.
The main thing I have explored is a normal git clone followed by a pull --all. I know now that --all will never pull branches that are not already checked out, so here I fail to get all existing remote branches.
The second thing I tried was git clone --mirror, which to some extent is a full copy of the remote repository. I fail to understand what I get on disk after just this command. It is super fast and surely not downloading all content; the size is way under even a normal repository when zipped.
So I did a three step rocket.
git clone --mirror
git clone "from the mirror" to "local repo". Then I get a repo with a working tree
git pull --all on the local repo created above in hope to get all remote branches as local branches
When done I did a git branch to examine what I got, but it still seems like I have more remote branches than local ones.
I apologize if my lack of knowledge confuse things! Which approach shall I use?
I am thinking of one more and that is to do a normal clone. Then get all remote branches to loop them through and do checkout. Would that be a better option?
|
I want to automatically create a offline backup of git repositories that can be used to restore a failed cloud based repository like Bitbucket
|
Solved it. The Job Activity Monitor is not detailed; it only says "An error occurred". I found the error in this directory:
C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log
One of the databases is offline and SQL Server can't back it up. I removed the offline database from the backup settings and it's working now.
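If you need to find which database is offline before touching the maintenance plan, a quick check from the command line (instance name and authentication are placeholders; -E uses Windows authentication, use -U/-P for SQL logins):
sqlcmd -S localhost -E -Q "SELECT name, state_desc FROM sys.databases WHERE state_desc <> 'ONLINE';"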
|
I have a backup issue in SQL Server. Backup plans stop working at the first step of the backup job.
This is the error description:
Backup.LOG,,,The job failed. The Job was invoked by User sa. The last step to run was step 1 (LOG)
The server has 500 GB of free storage. There is no disk error or low memory warning.
I have checked the backup plans and checked whether there is read-only protection on the disk, but there isn't.
|
I have a backup error on SQL SERVER 12, how can i solve
|
0
Traditionally you can use the pg_dump command to back up your Postgres database:
pg_dump -U postgres -h localhost -f <BACKUP_FILE> <DATABASE_NAME>
Additionally, use a tool like rsync to copy the backup file to the NAS device. You can use a cron job to schedule the backup process to run periodically.
Hope it helps
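Tied to the compose file in the question, a hedged sketch of a scheduled dump would be: run pg_dump inside the container with docker exec, copy the file out, and ship it to the NAS with rsync or a scheduled copy job. The container, user and database names below match the question; the paths are placeholders:
# Dump the 'test' database from inside the running container
docker exec -t imatecTest pg_dump -U root -d test -Fc -f /tmp/test.dump
docker cp imatecTest:/tmp/test.dump ./backups/test_$(date +%F).dump
# Then copy ./backups to the NAS, e.g. with rsync or a scheduled task
rsync -av ./backups/ nas-host:/volume1/pg-backups/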
Share
Improve this answer
Follow
answered Jan 9, 2023 at 14:22
vegiopsvegiops
30322 silver badges66 bronze badges
2
Does it work the same way with Docker running on Windows?
– Tom
Jan 9, 2023 at 17:15
these tools should work or you might need to investigate other flavours of rsync
– vegiops
Jan 16, 2023 at 17:14
Add a comment
|
|
I am struggling to get this working. It is no problem for me to set up a Postgres database with Docker and access it from other clients with DBeaver or pgAdmin. My problem is that I am not able to perform an automatic backup of the Docker container or volume.
This is my Docker Compose file:
version: '3.8'
services:
db:
container_name: imatecTest
image: postgres
restart: always
environment:
POSTGRES_USER: root
POSTGRES_PASSWORD: root
POSTGRES_DB: test
ports:
- "5432:5432"
pgadmin:
container_name: pgadmin4
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: [email protected]
PGADMIN_DEFAULT_PASSWORD: root
ports:
- "5050:80
The only way it "worked" was this:
I saved the volume of the docker container locally in my windows directory and did my backup with windows-backup. The problem is I have to shut down my database during the backup otherwise there will be data loss.
Can you give me some advice how to perform an automatic backup of my volume or the whole container with PG_Dump or something else?
|
Is it possible to run a Postgres Database as a Docker container and backup it periodically to a NAS?
|
0
$MyServer = "localhost"
$DBs = Get-SqlDatabase -ServerInstance $MyServer | Out-GridView -OutputMode Multiple
foreach ($DB in $DBs) {
$dbName = $DB.Name
Write-Host $dbName
$date = Get-Date -format "yyyyMMdd_hhmmss"
Backup-SqlDatabase -ServerInstance $MyServer -Database $dbName -BackupFile C:\Backup\$MyServer-$dbName-$date.bak
}
Share
Improve this answer
Follow
answered Jan 7, 2023 at 6:17
Geri ReshefGeri Reshef
40711 gold badge77 silver badges1818 bronze badges
Add a comment
|
|
I try to backup multiple SQL Server databases using this code:
$date = Get-Date -format "yyyyMMdd_hhmmssfff"
Get-SqlDatabase -ServerInstance localhost | Out-GridView -PassThru | Backup-SqlDatabase -BackupFile C:\Backup\MyBackup_$date.bak
I'm able to include the backup time in the backup file name ($date), but not the name of the database;
as a consequence I get only one backup file, because each one overwrites the previous one (this code lets me select multiple databases from a pop-up list).
How can I get the database name so I can include it in the .bak file name?
|
How can I get the backed-up Database name?
|
0
After reinstalling your app the backup files do not belong to your app anymore.
You could have checked that with File.exists() and File.canRead().
Use ACTION_OPEN_DOCUMENT_TREE to let the user select APP_NAME directory.
Or use ACTION_OPEN_DOCUMENT to let the user choose individual files.
Share
Improve this answer
Follow
answered Dec 15, 2022 at 14:11
blackappsblackapps
8,76122 gold badges1212 silver badges2626 bronze badges
4
i've try this and i got true in file.canRead() and file.exists();
– Urvish Vocsy
Dec 16, 2022 at 13:21
Hard to believe. Please post the relevant code in your post.
– blackapps
Dec 16, 2022 at 13:26
thankyou @blackapps but i found another way to fix my problem. lots of thankyou for help :)
– Urvish Vocsy
Dec 16, 2022 at 13:29
@UrvishVocsy how to fix that issue ?
– shahram_javadi
Dec 18, 2022 at 10:18
Add a comment
|
|
I need to create and restore a backup in my application. My backup logic works well, but there is a problem that I describe below.
When I create my backup and restore it without uninstalling the application, it works well.
But when I create a backup, uninstall the application, then install the application again and try to restore it, I am facing errors.
My backup file location is /storage/emulated/0/Downloads/APP_NAME/BACKUP_FOLDER/BACKUP_NAME.xml and I want to copy it into /storage/emulated/0/Android/data/user/PACKAGE_NAME/shared_prefs/BACKUP_NAME.xml
and the error is java.io.FileNotFoundException: /storage/emulated/0/Downloads/APP_NAME/BACKUP_FOLDER/BACKUP_NAME.xml: open failed: EACCES (Permission denied)
|
NOTES Alpha: Error copying file : open failed: EACCES (Permission denied)
|
0
Rather than using python to pipe a command into a new python interpreter, consider reading the file into a bytes object. You can do this using the open function (make sure to set mode="rb" to avoid decoding errors). The error you are hitting is because stdin is opened in text mode, which expects all input to be valid text, but you're reading in non-text (tarball) data.
For example:
with open("backup.ab", "rb") as input_file:
input_data = input_file.read()
with open("backup.tar", "wb") as output_file:
output_file.write(zlib.decompress(input_data))
You may find that your code is faster without using Python at all. For simple tasks like this, a simple shell script may be more suited (but less portable).
Share
Improve this answer
Follow
answered Dec 13, 2022 at 11:15
Hack5Hack5
3,4271818 silver badges3737 bronze badges
Add a comment
|
|
I'm writing a Python script to make a backup of an Android application with adb:
os.system("adb backup -apk -nosystem " + app_identifier)
Then I'm trying to convert the .ab file to tar with Python so I can open it:
os.system( "dd if=backup.ab bs=1 skip=24 | python -c \"import zlib,sys;sys.stdout.write(" + "zlib.decompress(sys.stdin.read()))\" > backup.tar" )
Sometimes this works and sometimes I get the following error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xda in position 1: invalid continuation byte
Do you know why this error pops up? As I said, sometimes it works, sometimes not.
I asked Google; it shows me results for other contexts.
I reinstalled Python, reconnected the Android phone, and ran the command in the console.
|
Python backup android application problem with decoding
|
0
Changing the upload limit size takes just 5 steps
Go to Plugin editor from the left menu panel in the admin dashboard.
At the top right, choose All-in-One WP Migration from the dropdown and click Select.
Click on the constants.php file.
Search for AI1WM_MAX_FILE_SIZE and change its value, as in the code below:
define( 'AI1WM_MAX_FILE_SIZE', 2 << 28 );
To
define( 'AI1WM_MAX_FILE_SIZE', 536870912 * 5 );
536870912 bytes is 512 MB, so * 5 gives roughly 2.5 GB; if you want more, raise the multiplier (for example 536870912 * 10 for about 5 GB).
Don’t forget to update the file.
Enjoy the new upload size
If the upload size has not changed go to the .htaccess file in the server root folder and add the below lines.
php_value upload_max_filesize 2048M
php_value post_max_size 2048M
php_value memory_limit 512M
php_value max_execution_time 300
php_value max_input_time 300
Share
Improve this answer
Follow
edited Nov 29, 2022 at 17:27
answered Nov 29, 2022 at 17:15
Kayesh BhuiyanKayesh Bhuiyan
122 bronze badges
Add a comment
|
|
I'm importing a 169.7 MB WordPress backup, but it does not finish; if I refresh the page I see an error. Does this happen if there is a difference in versions?
I uninstalled the MySQL DB and user, then created a new DB and user and installed WordPress.
|
Wordpress backup import using all-in-one plugin not working
|
Have you tried changing the BackupPolicy variable to [parameters('inclusionTagValue')] (~ line 473)? You will first need to duplicate the built-in policy and make it custom. I am testing this now but it will take a little time for Azure policy to evaluate and remediate.
"variables": {
"backupFabric": "Azure",
"backupPolicy": "[parameters('inclusionTagValue')]",
"v2VmType": "Microsoft.Compute/virtualMachines",
"v2VmContainer": "iaasvmcontainer;iaasvmcontainerv2;",
"v2Vm": "vm;iaasvmcontainerv2;",
"vaultName": "[take(concat('RSVault-', parameters('location'), '-', guid(resourceGroup().id)),50)]"
}
|
TL;DR
Is it possible to get the name of a backup policy I'll want to apply, from a Virtual Machine tag value ? For example : backup=myBackupPolicyDaily1AM or backup=myBackupPolicyWeekly
Context
In Azure, I have these resources :
A Recovery Services vault
Within this Recovery Services vault, a Backup Policy named DefaultPolicy
An Azure policy duplicated from the builtin Configure backup on VMs with a given tag to an existing recovery services vault in the same location (https://learn.microsoft.com/en-us/azure/backup/backup-azure-auto-enable-backup#policy-2---configure-backup-on-vms-with-a-given-tag-to-an-existing-recovery-services-vault-in-the-same-location)
A virtual machine vmtest01 having a tag backup=backupme
What's working now
I've applied the Azure Policy with these parameters :
inclusionTagName: backup
inclusionTagValue: backupme
vaultLocation: West Europe (from dropdown list)
backupPolicyId: DefaultPolicy (From dropdown list, after selecting my recovery vault)
Things are working fine so far. After a Remediation, vmtest01 is backed up.
What I want now
Now I want to apply the Backup Policy name I'll have from the backup tag value. For example :
VMs having the tag backup=myBackupPolicyDaily1AM will have the myBackupPolicyDaily1AM Backup Policy
VMs having the tag backup=myBackupPolicyWeekly will have the myBackupPolicyWeekly Backup Policy
I've search on the interwebs and didn't see any example for that use case. Is it possible ?
Note: All the resources are in the same location.
|
In Azure, is it possible to retrieve a Backup Policy from a VM tag value?
|
0
The .git directory contains all your repository data. I.e., it has all the commits and version history. When you have access to the .git directory, you can restore the working directory (your actual data) using the git restore command. Move the .git directory into e.g. C:\mygit and run git restore C:\mygit.
As noted by @torek, it's unwise to use git as a general backup system. Git is not a backup or cloud file sync software. Don't put the contents of your Desktop directory into version control unless you have a strong need for it.
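A minimal sketch of that restore, assuming Git Bash on Windows and that C:\mygit contains nothing but the moved .git directory (the source path is hypothetical):
mkdir /c/mygit
mv /c/Users/you/Desktop/.git /c/mygit/.git   # move the repository metadata, not a copy of your files
cd /c/mygit
git restore .                                # re-creates the tracked files from the index; or: git checkout -- .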
Share
Improve this answer
Follow
answered Oct 26, 2022 at 10:29
bahrepbahrep
30.2k1212 gold badges104104 silver badges152152 bronze badges
Add a comment
|
|
The .git folder that I created on my desktop contains around 64 GB of data, so I thought I would delete that folder to free up some space on my system.
Now what can I do to get that data back?
Which git command should I use to restore the data?
I tried to free up space by deleting the .git folder, which contained 64 GB of data, but almost all the data on my desktop was deleted.
|
By mistake I created a .git folder on my Desktop, and deleting that folder caused all the data on the Desktop to be deleted
|
0
If you are using project management software, it should be uploaded there for everyone on the team to see.
Then we just copy and save it to our dev PC when we need it.
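If you also want an encrypted copy outside the project tool (the approach the question suggests), one hedged option is to encrypt the file with GPG before uploading it anywhere; file names are examples:
gpg --symmetric --cipher-algo AES256 .env   # prompts for a passphrase, writes .env.gpg
gpg --decrypt .env.gpg > .env               # restore when needed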
Share
Improve this answer
Follow
answered Oct 17, 2022 at 0:18
Casey SVCasey SV
13088 bronze badges
Add a comment
|
|
I might work in the future with 3rd-party API keys that give access to very sensitive information, and I was thinking about how to back them up.
I was thinking about encrypting a text document and uploading it to some cloud storage like Dropbox.
Or using something like KeePass?
https://keepass.info/index.html
Any recommendations?
How have you been handling this in your projects?
(In case there is a better place to ask, on some Stack Exchange group, please leave a comment before closing.)
|
Where to backup .ENV Files, API Keys, Project Passwords? Any best practices or how do you do it?
|
0
Check your Vault Lock retention settings, and verify whether the vault's maximum retention is less than your backup rule's retention (DeleteAfterDays).
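As a hedged CLI check (the vault name is a placeholder), you can compare the vault's locked retention range against the rule's DeleteAfterDays:
aws backup describe-backup-vault --backup-vault-name MyVault
# inspect MinRetentionDays / MaxRetentionDays in the output;
# the plan rule's DeleteAfterDays must fall inside that range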
Share
Improve this answer
Follow
answered Jan 17 at 17:39
ncklleenckllee
1
Add a comment
|
|
I'm working on an AWS Backup plan and deployed it successfully; however, after adding extra Backup Vault attributes/settings like LockConfiguration, the console reports the error Backup job failed because the lifecycle is outside the valid range for backup vault, which I don't understand. The AWS Backup documentation does not seem well updated on this.
While googling the error I didn't find any clue.
Below is the cloudformation code which i'm using :
Code: cloudformation
BackupsVault:
  Type: "AWS::Backup::BackupVault"
  Properties:
    BackupVaultName: MyVault
    LockConfiguration:
      ChangeableForDays: 3
      MinRetentionDays: 4
      MaxRetentionDays: 5

BackupPlan:
  Type: "AWS::Backup::BackupPlan"
  Properties:
    BackupPlan:
      BackupPlanName: !Ref myBackupPlan
      BackupPlanRule:
        - RuleName: !Ref MyFsxRule
          TargetBackupVault: !Ref BackupsVault
          ScheduleExpression: "cron(45 13 ? * * *)"
          Lifecycle:
            DeleteAfterDays: 21
  DependsOn: BackupsVault
Error:
Backup job failed because the lifecycle is outside the valid range for backup vault.
Is there anything conflicting between the BackupVault and the BackupPlan? Please let me know if you already know about this and can point me in the right direction.
|
aws Backup job failed because the lifecycle is outside the valid range for backup vault
|
0
I'm guessing you are using AWS Backup to handle the backups of your DynamoDB tables. In that case you may want to review this document, which explains how to delete a backup: https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html
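For backups that AWS Backup is allowed to delete (the system backup in your error message expires on its own, as the message says), a hedged CLI sketch with placeholder values looks like:
aws backup delete-recovery-point \
    --backup-vault-name <your-vault-name> \
    --recovery-point-arn <recovery-point-arn>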
Share
Improve this answer
Follow
answered Oct 10, 2022 at 18:59
Leeroy HanniganLeeroy Hannigan
15.9k33 gold badges1919 silver badges3737 bronze badges
Add a comment
|
|
When I try to delete a backup in DynamoDB, I am prompted with the following error message:
Invalid Request: User is not allowed to delete the system backup with arn
arn:****. It will automatically expire on ***.
However, my account has administrator access. Is there a policy that does not allow backups to be removed for a certain amount of time?
|
How to delete a dynamoDb backup
|
0
When you back up your iPhone to your MacBook, backups are usually stored in ~/Library/Application Support/MobileSync/Backup/. Locating the backups on the external drive and copying the one you want to restore to this location should make it restorable. See also: Locate backups of your iPhone, iPad, and iPod touch.
If there is not enough space available on your Macbook so you can't copy the backup over, or if you want a less-effort solution that you can use to easily backup and restore using an external drive, you can use a symlink to map a folder on your external drive to the above path. See this article for instructions: How to Back Up Your iPhone to a Different Location on Your Mac.
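A minimal sketch of the symlink approach from that article, assuming the external drive is mounted at /Volumes/ExternalSSD (paths are examples):
# move the existing backup folder onto the external drive, then link it back
mv ~/Library/Application\ Support/MobileSync/Backup /Volumes/ExternalSSD/iPhoneBackup
ln -s /Volumes/ExternalSSD/iPhoneBackup ~/Library/Application\ Support/MobileSync/Backup
Depending on the macOS version, Terminal may need Full Disk Access for this to work.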
Share
Improve this answer
Follow
edited Jan 12, 2023 at 14:38
answered Jan 12, 2023 at 14:37
LukasLukas
111 bronze badge
Add a comment
|
|
I have been backing up my iPhone to an external hard drive for quite some time. This helps me save a lot of disk space on my MacBook. I connect the iPhone via USB cable to my MacBook and open the iPhone sync window in Finder. When I click the Back Up Now or Sync button, the iPhone data is backed up to my external SSD instead of the MacBook's local storage.
https://www.imore.com/how-move-your-iphone-or-ipad-backups-external-hard-drive
But later I realised that I may not be able to restore a backup if I follow this practice of backing up to external storage. The Restore Backup button now remains disabled in the Finder sync window because there is no local backup for the iPhone. Can someone help me understand how to restore an iPhone backup from external storage?
|
iPhone Backup in External Storage
|
0
#!/bin/bash
now=$(date +"%Y-%m-%d" -d "0 day ago");
rsync -azvr -e 'sshpass -p "YOUR_PASSWORD" ssh -p YOUR_PORT_SSH_IF_NEEDED -o StrictHostKeyChecking=no' root@HOSTAME:"/var/backup/web*/*${now}*" /cygdrive/h/YOUR_SPECIAL_PATH_ON_LOCAL_MACHINE/var-backup-${now}
On the Linux server, the ISPConfig panel stores the backup files in:
/var/backup/web-TODAY_DATE/
and I transfer all these files to
H:/YOUR_SPECIAL_PATH_ON_LOCAL_MACHINE/var-backup-${now}
Share
Improve this answer
Follow
answered Sep 27, 2022 at 6:53
OrnotOrnot
551010 bronze badges
Add a comment
|
|
My script backs up the web directories and SQL files of all the websites in the ISPConfig panel to my local machine (not on the Linux dedicated server).
The script is Cygwin bash.
|
How to create a cygwin script to automatic backup on local PC in ispconfig
|
0
Currently, you are supplying only one element to the with_items, that is, /home/volumeid, meaning your loop will iterate only once for the file name and not its contents.
You need to use the file lookup if you are on localhost or the slurp module on the remote host. Example:
For the localhost:
- name: Show the volume id from the file
  debug:
    msg: "{{ item }}"
  loop: "{{ lookup('file', '/home/volumeid').splitlines() }}"
For the remote host:
- name: Alternate if the file is on remote host
  ansible.builtin.slurp:
    src: /home/volumeid
  register: vol_data

- name: Show the volume id from the file
  debug:
    msg: "{{ item }}"
  loop: "{{ (vol_data['content'] | b64decode).splitlines() }}"
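For completeness, a shell-only sketch (not part of the original answer) that skips the intermediate file and loops over the in-use volume IDs directly; --force is typically required for volumes that are attached:
for id in $(openstack volume list --status in-use -f value -c ID); do
    openstack volume backup create --force "$id"
done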
Share
Improve this answer
Follow
answered Sep 26, 2022 at 14:10
P....P....
17.9k33 gold badges3333 silver badges5353 bronze badges
Add a comment
|
|
I listed the 'in-use' volumes of an OpenStack instance and filtered their volume IDs into a file, from which the backups are supposed to be created:
shell: openstack volume list | grep 'in-use' | awk '{print $2}' > /home/volumeid
shell: openstack volume backup create {{ item }}
with_items:
- /home/volumeid
The error looks like this:
**failed: [controller2] (item=volumeid) => {"ansible_loop_var": "item", "changed": true, "cmd": "openstack volume backup create volumeid", "delta": "0:00:03.682611", "end": "2022-09-26 12:01:59.961613", "item": "volumeid", "msg": "non-zero return code", "rc": 1, "start": "2022-09-26 12:01:56.279002", "stderr": "No volume with a name or ID of 'volumeid' exists.", "stderr_lines": ["No volume with a name or ID of 'volumeid' exists."], "stdout": "", "stdout_lines": []}
failed: [controller1] (item=volumeid) => {"ansible_loop_var": "item", "changed": true, "cmd": "openstack volume backup create volumeid", "delta": "0:00:04.020051", "end": "2022-09-26 12:02:00.280130", "item": "volumeid", "msg": "non-zero return code", "rc": 1, "start": "2022-09-26 12:01:56.260079", "stderr": "No volume with a name or ID of 'volumeid' exists.", "stderr_lines": ["No volume with a name or ID of 'volumeid' exists."], "stdout": "", "stdout_lines": []}**
Can someone explain how to create the volume backups from that file (which contains the volume IDs) in the Ansible playbook?
|
How to repeat the same command in creating the openstack backup?
|
V$RECOVERY_FILE_DEST shows target OS or ASM directories, as specified in the db_recovery_area_dir initialization parameter, which are designated to contain flash recovery area files. If there is nothing visible in the table, confirm the setting of the initialization parameter. If both are null/empty, then there is no Flash Recovery Area defined for the database.
select * from v$recovery_file_dest;
NAME SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES CON_ID
----- ----------- ---------- ----------------- --------------- ----------
+RECO 1.3980E+10 8989442048 0 5 0
show parameter db_recovery
NAME TYPE VALUE
-------------------------- ----------- ------
db_recovery_file_dest string +RECO
db_recovery_file_dest_size big integer 13332M
|
I get 0 rows in the results when I run
SELECT * FROM V$RECOVERY_FILE_DEST
V$RECOVERY_FILE_DEST is the default view name for the Oracle flash recovery area.
Does this mean that a backup never happens on this platform, or that the flash recovery area name is different from the default? If so, how can I locate the flash recovery area in this Oracle database?
|
Why is the Oracle Flash Recovery Area blank?
|
0
If the project files still exist (amplify directory), you may be able to re-create the project with the existing resources.
One idea could be to clone the git repository from when the amplify project files were intact and run amplify init
OR
The amplify-backup directory is generally generated automatically when running amplify commands. You could try renaming it to amplify and running amplify init.
See more here for re-creating an amplify project on another account: https://docs.amplify.aws/cli/migration/cli-migrate-aws-account/
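A hedged sketch of that second idea (run from the project root; verify against the linked guide before trying it on anything important):
mv amplify-backup amplify
amplify init        # re-initialise the project against your AWS account
amplify env list    # confirm which backend environments the CLI now sees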
Share
Improve this answer
Follow
answered Jan 18, 2023 at 6:14
Dylan wDylan w
2,73011 gold badge2020 silver badges3131 bronze badges
Add a comment
|
|
I've gotten an Amplify project dropped in my lap where the backend environment is deleted (or lost when the project was moved to another account).
I haven't worked with Amplify before, so I'm not sure how "automatic" everything is.
I noticed that the project has a folder called 'amplify-backup' which contains a bunch of JSON and GraphQL config files, so I assumed that I could use those somehow to restore the backend environment in AWS, but I can't seem to find any information on how to do so.
There's currently no backend environment in the AWS console and I don't really know which services the backend environment should contain.
Is it possible to restore the backend environment and all the services that the application needs, or do I need to figure out which services are needed?
If so, any pointers on how to find which services are used?
|
Restore amplify backend environment
|
The best method is to use the Azure Backup Service
https://learn.microsoft.com/en-us/azure/backup/backup-overview
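As a hedged Azure CLI sketch (resource names are placeholders), one way to protect a VM with Azure Backup before deleting it:
az backup vault create --resource-group MyRG --name MyVault --location eastus
az backup protection enable-for-vm \
    --resource-group MyRG --vault-name MyVault \
    --vm MyLinuxVM --policy-name DefaultPolicy
# then run an on-demand backup (portal or az backup protection backup-now) before deleting the VM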
|
We have one Linux and one Windows VM that we no longer have a need for. Before we delete the VMs, we would like to back them up in case we need to go back to them. What is the best way to do this?
Thank you!
Sanjeev
|
Backing up a VM before deleting
|
0
See Overlapping backup rules in this doc; it could be the reason you're only getting 6 per day if you have other rules in addition to the hourly one.
Careful: "complete within" is only there to fail the backup if it exceeds the window; it doesn't speed the backup up or anything like that.
AWS support clarified for me: since you are starting your backup job every day at 8 AM, the backup jobs will run every hour from 8 AM to 11:55 PM. From 12 AM to 7:55 AM no jobs will be created.
So I changed my hourly job to start at 12:00 AM and start within 1 hour. Once I saved it, the job view screen stopped displaying the start time at all.
Share
Improve this answer
Follow
edited Mar 2, 2023 at 15:03
answered Mar 2, 2023 at 3:56
user433342user433342
88911 gold badge77 silver badges2828 bronze badges
Add a comment
|
|
I have a question about AWS backup window.
The documentation says that the default backup window starts at 5 AM UTC and lasts 8 hours.
I want to create backups of an EC2 instance every hour of the day, which means I should have 24 backups at the end of a day. I have chosen the backup frequency to be hourly, the start time to be 6:00 PM UTC, start within 1 hour, and completion within 2 hours. With this setup, I only got 6 backups in one day.
So my question is: should I set 12 AM as the start time of the backup window to ensure 24 backups in a day?
What exactly does it mean that the default AWS backup time is 5 AM UTC with a backup window of 8 hours? Is the 8-hour backup window the start-within time, or does it mean that backups can only happen within those 8 hours, especially when we select hourly backups? That would mean we would only have 8 backups per day.
|
AWS backup window
|
0
Both ways won't reliably detect all kinds of data corruption. zero_damaged_pages won't help at all, and checksums will only detect data corruption on disk (as opposed to corruption in RAM or caused by database bugs).
Some data corruption can be detected by nasty error messages, for example if you dump the database (which selects all data). Other types of data corruption causes no errors, but bad results.
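A hedged sketch of both checks (database name and data directory are examples; pg_checksums ships with PostgreSQL 12 and later and requires the server to be stopped):
pg_dump mydb > /dev/null            # forces a full read of all table data; corruption usually surfaces as errors
pg_checksums --check -D "$PGDATA"   # offline verification; only useful if data checksums were enabled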
Share
Improve this answer
Follow
answered Aug 23, 2022 at 14:33
Laurenz AlbeLaurenz Albe
225k1818 gold badges234234 silver badges303303 bronze badges
Add a comment
|
|
Referring to the official documentation, I find that there are two ways:
Set the zero_damaged_pages parameter on. However, this is not recommended because data loss may occur, and I still would not know whether the database is corrupt.
Enable checksums. This can cause a significant performance cost, and you only find out that the database is corrupted when you query it.
I want to restore data from a backup database when the current database is found to be corrupted on an embedded device. Is there a more convenient way to find out whether a PostgreSQL database is corrupt, just like SQLite? In SQLite, database corruption can be detected by the API return value:
#define SQLITE_CORRUPT 11 /* The database disk image is malformed */
|
How to detect whether postgresql database is corrupted?
|
0
I'm confused: how do you have enough space to copy the entire pgdata directory and WAL files, but not enough space to do a backup? Considering that you seem to believe backups are important (which they are if this is a critical system), there must be a way to cobble together some storage, either via NFS or S3 (Cloudian or MinIO would work here), to do a proper gpbackup; it will save you a ton of time should you ever need to use this backup.
Share
Improve this answer
Follow
answered Aug 15, 2022 at 15:27
Jacque IstokJacque Istok
4422 bronze badges
Add a comment
|
|
We have a greenplum 6.x instance. About 40 segment servers.
We have to backup the instance.
I know there are two main methods to back up a Greenplum instance:
gpbackup for parallel backup. The main recommended method, as I understand it.
pg_dump for non-parallel backup, which has to go through the master (not recommended because of slow performance). pg_dump and pg_restore are available for compatibility with standard Postgres databases.
But we cannot use gpbackup: we do not have free space to keep the backup files. There is not enough free space on the Greenplum servers, and we don't have S3, a NAS shared folder, or a Data Domain.
The only way we theoretically have is to back up the /pgdata/ directories on all servers of the Greenplum instance.
So the idea: run pg_start_backup, copy the entire /pgdata/ directory, and copy the WAL files to keep the backup consistent.
But I cannot understand how to run pg_start_backup on all the PostgreSQL instances that are members of the Greenplum cluster.
|
backup a greenplum instance: custom script
|
0
Actually, the simplest way is to use crontab plus a shell script. Of course, there is also plenty of paid software that can do this. If you want to back up, it is recommended that you back up your data files and configuration files so that they can be recovered; that is also the safest approach. Backing up MySQL's installation files is of little significance, because they can be downloaded from the official website at any time.
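A minimal sketch of the crontab-plus-shell idea, assuming a Unix-like host, credentials stored in ~/.my.cnf, and example paths (on Windows you would use Task Scheduler and adjust the paths instead):
# crontab entry: nightly at 02:00, dump all databases and copy the server config
0 2 * * * mysqldump --all-databases --single-transaction --routines --events > /backups/mysql-$(date +\%F).sql && cp /etc/mysql/my.cnf /backups/my.cnf-$(date +\%F)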
Share
Improve this answer
Follow
answered Aug 6, 2022 at 1:35
RP.SRP.S
66155 silver badges1111 bronze badges
Add a comment
|
|
I know about database backups for MySQL, but how would one go about making a backup of the entire MySQL configuration along with the databases?
I'm building a few sets of services/tools and I need to provide instructions so that not-so-technical people could restore them if I weren't available. One of the things I need to back up is a self-hosted MySQL instance.
Can I just zip the MySQL folder and the ProgramData MySQL folder, and would that make a complete restore solution? Just unzip and run the service?
|
Making backup of MySQL database and server
|
0
I found the issue: no data was loaded because, when loading partial data from a dump_instance dump, the files that need to be copied are as follows (see the sketch below):
@.* - dump metadata files
SCHEMA@* - schema data files
SCHEMA.* - schema metadata files (these are the files that were missing and caused no data to load)
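A hedged sketch of copying such a partial dump (directory layout and schema name are examples only):
cp /dumps/full/@.*        /dumps/partial/   # dump metadata files
cp /dumps/full/myschema@* /dumps/partial/   # schema data files
cp /dumps/full/myschema.* /dumps/partial/   # schema metadata files (the ones that were missing)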
Share
Improve this answer
Follow
answered Aug 2, 2022 at 8:18
JulesJules
3911 gold badge11 silver badge44 bronze badges
Add a comment
|
|
I am using MySQL shell utilities 8.0.23 for both dump and load and the process goes as follows:
On one host:
I use util.dump_instance() to create a dump of a MySQL 8.0.23 instance with multiple schemas.
On another host:
I download partial files from the dump - all the files relevant to a specific schema and the metadata files (@.;${SCHEMA}@)
When I use util.load_dump() it seems the data is not loaded.
here is my code:
util.load_dump(RESTORE_PATH, {'threads': THREADS, 'showProgress': True, 'includeSchemas': [INCLUDE_SCHEMA], 'excludeTables': [EXCLUDE_TABLES], 'loadData': True, 'loadDdl': True })
and the result I am getting when running this process using CI is as follows:
Opening dump...
15:05:40 Target is MySQL 8.0.23. Dump was produced from MySQL 8.0.23
15:05:40 Scanning metadata \ 0 / ~0
Scanning metadata - done
15:05:40 Checking for pre-existing objects...
15:05:40 Executing common preamble SQL
15:05:40 Executing DDL \ 0 / ~0
Executing DDL - done
15:05:40 Executing view DDL \ 0 / ~0
Executing view DDL - done
15:05:40 Starting data load
15:05:40
15:05:40 Recreating indexes \ 0 / 0
Recreating indexes - done
15:05:40 Executing common postamble SQL
15:05:40 No data loaded.
15:05:40 0 warnings were reported during the load.
What am I missing? Why is my data not loaded? What can I do to fix this so my data is loaded properly?
|
Why does mysql shell not load any data when I use util.load_dump()?
|