Response | Instruction | Prompt
---|---|---|
Include SearchOption.AllDirectories and you will get all subdirectories:
DirectoryInfo dir = new DirectoryInfo(sourceDirName);
DirectoryInfo[] dirs = dir.GetDirectories("*", SearchOption.AllDirectories);
When you now loop through the directories, you get not just the first level of subdirectories but all of them; for each directory, just get the files it contains:
foreach (DirectoryInfo subdir in dirs)
{
    FileInfo[] files = subdir.GetFiles();
    // ... log or copy each file here
}
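Applied to the asker's LogBackup method below, the subdirectory loop could look roughly like this (a sketch reusing the question's lines, extensionsToSkip, filesToSkip and foldersToSkip; the relative-path handling is an assumption of mine, not part of the original answer):
foreach (DirectoryInfo subdir in dir.GetDirectories("*", SearchOption.AllDirectories)
                                    .Where(d => !foldersToSkip.Contains(d.FullName)))
{
    // Rebuild the path relative to the source so the destination mirrors the tree
    string relativePath = subdir.FullName.Substring(sourceDirName.Length)
                                         .TrimStart(Path.DirectorySeparatorChar);
    foreach (FileInfo file in subdir.GetFiles()
        .Where(f => !extensionsToSkip.Contains(f.Extension) && !filesToSkip.Contains(f.FullName)))
    {
        lines.Add("SOURCE FILE:");
        lines.Add(file.FullName);
        lines.Add("DESTINATION FILE:");
        lines.Add(Path.Combine(destDirName, relativePath, file.Name));
        lines.Add("");
    }
}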
|
So I want to log what happens when I back up files, but I'm not sure how to make it work for files in subdirectories as well.
Right now I have this code that works for all files in the selected directory but doesn't work for files in subdirectories:
private void LogBackup(string sourceDirName, string destDirName)
{
List<string> lines = new List<string>();
string logDestination = this.tbox_LogFiles.Text;
string dateString = DateTime.Now.ToString("MM-dd-yyyy_H.mm.ss");
DirectoryInfo dir = new DirectoryInfo(sourceDirName);
DirectoryInfo[] dirs = dir.GetDirectories();
lines.Add("FILES TO COPY:");
lines.Add("--------------");
FileInfo[] files = dir.GetFiles();
foreach (FileInfo file in files
.Where(f => !extensionsToSkip.Contains(f.Extension) && !filesToSkip.Contains(f.FullName)).ToList())
{
string desttemppath = Path.Combine(destDirName, file.Name);
string sourcetemppath = Path.Combine(sourceDirName, file.Name);
lines.Add("SOURCE FILE:");
lines.Add(sourcetemppath);
lines.Add("DESTINATION FILE:");
lines.Add(desttemppath);
lines.Add("");
}
foreach (DirectoryInfo subdir in dirs
.Where(f => !foldersToSkip.Contains(f.FullName)))
{
//NOT SURE WHAT TO WRITE HERE
}
using (StreamWriter writer = new StreamWriter(logDestination + @"\LOG " + dateString + ".txt"))
{
foreach (string line in lines)
{
writer.WriteLine(line);
}
}
}
Any ideas please?
|
Creating a log of a backup C#
|
This code sample is a bit on the verbose side to make it easier to understand, but the table msdb.dbo.backupset can help you with this.
https://learn.microsoft.com/en-us/sql/relational-databases/system-tables/backupset-transact-sql
declare @MostRecentAuthorizedFullBackup datetime
declare @MostRecentUnauthorizedNonCopyOnlyBackup datetime
declare @AuthorizedBackupUser nvarchar(128) = N'YourBackupUser'
declare @DatabaseName nvarchar(128) = N'YourDatabase'
select top (1) @MostRecentAuthorizedFullBackup = backup_start_date
from msdb.dbo.backupset
where database_name = @DatabaseName
and is_copy_only = 0
and type = 'D'
and user_name = @AuthorizedBackupUser
order by backup_start_date desc
select top (1) @MostRecentUnauthorizedNonCopyOnlyBackup = backup_start_date
from msdb.dbo.backupset
where database_name = @DatabaseName
and is_copy_only = 0
and type = 'D'
and user_name <> @AuthorizedBackupUser
order by backup_start_date desc
if @MostRecentAuthorizedFullBackup > @MostRecentUnauthorizedNonCopyOnlyBackup
begin
print 'Differential base is good'
end
else
begin
print 'Differential base is bad'
end
|
I do a full backup once a month and then incremental backups in between.
But meanwhile another user could do a full backup and break my chain. I know that there is a copy-only full backup.
But in my case I can't know when and by whom a backup will be done, so I need to find a solution to implement on my side to avoid this problem.
Does someone have an idea or a solution to implement? Thank you very much.
|
Is there another solution than a copy-only full backup?
|
I've found a workaround: in the Vertica backup script /opt/vertica/bin/vbr.py I changed ulimit -u to ulimit -n.
|
I'm trying to do a full backup of a Vertica database. When I execute the command:
/opt/vertica/bin/vbr.py --debug 3 --task backup --config-file vertica_backup.ini
I am getting following error:
/bin/sh: 1: ulimit: Illegal option -u
Traceback (most recent call last):
File "/opt/vertica/bin/vbr.py", line 2526, in backup
prepareAll()
File "/opt/vertica/bin/vbr.py", line 1888, in prepareAll
configCheck()
File "/opt/vertica/bin/vbr.py", line 506, in configCheck
concurrency_upperboud = int(subprocess.Popen(['ulimit -u'],
shell=True, stdout= subprocess.PIPE).communicate()[0].strip())
ValueError: invalid literal for int() with base 10: ''
backup failed unexpectedly!
My vertica_backup.ini file:
[Misc]
snapshotName = vertica_backup
restorePointLimit = 1
passwordFile = vertica
[Database]
dbName = dwh_vertica
dbUser = dbadmin
[Transmission]
[Mapping]
v_dwh_vertica_node0001 = vertica1:/home/dbadmin/backups
It's Debian Wheezy:
Linux vertica1 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt11-1~bpo70+1 (2015-06-08) x86_64 GNU/Linux
/bin/sh points to /bin/sh -> dash
|
Vertica backup error
|
You could write a recursive copy function like this:
public void CopyData(string sourceDirectoryPath, string destDirectoryPath)
{
    // Make sure the corresponding destination directory exists
    Directory.CreateDirectory(destDirectoryPath);

    // Copy every file in the current directory
    foreach (string filePath in Directory.GetFiles(sourceDirectoryPath))
    {
        File.Copy(filePath, Path.Combine(destDirectoryPath, Path.GetFileName(filePath)), true);
    }

    // Recurse into each subdirectory
    foreach (string subDirPath in Directory.GetDirectories(sourceDirectoryPath))
    {
        CopyData(subDirPath, Path.Combine(destDirectoryPath, Path.GetFileName(subDirPath)));
    }
}
|
private void btn_Backup_Click(object sender, EventArgs e)
{
List<DirectoryInfo> SourceDir = this.lbox_Sources.Items.Cast<DirectoryInfo>().ToList();
List<DirectoryInfo> TargetDir = this.lbox_Targets.Items.Cast<DirectoryInfo>().ToList();
foreach (DirectoryInfo sourcedir in SourceDir)
{
foreach (DirectoryInfo targetdir in TargetDir)
{
string dateString = DateTime.Now.ToString("MM-dd-yyyy_H.mm.ss");
if (this.checkbox_zipfiles.Checked == true)
{
System.IO.Compression.ZipFile.CreateFromDirectory(sourcedir.FullName, targetdir.FullName + @"\BACKUP_" + sourcedir.Name + @"_" + dateString + @".zip");
LogBackup();
}
else
{
foreach (var file in sourcedir.GetFiles()
.Where(f => !extensionsToSkip.Contains(f.Extension) && !filesToSkip.Contains(f.FullName)).ToList())
{
file.CopyTo(targetdir.FullName + @"\" + file.Name, true);
LogBackup();
}
}
}
}
}
So far I have this code that only works on files; how do I make it work for folders, subfolders, and the files inside them?
|
Program that copies only certain files and folders C#
|
This is possible but it makes no sense and would take a bit of hacking together. This is one of those situations where if you have to ask how to do it, you probably won't be able to do it.
My advice would be to integrate WordPress normally and just have WP as part of your backup.
Keep WP as lean as possible, install few plugins and only have 1 theme installed. If you want to be really secure, limit access to your WP Admin by IP address and ensure all permissions are set restrictively.
|
I have a Magento store running on name.pippo.com.
Now I am considering installing WordPress for blogging.
Since I would like to integrate Magento + WordPress as a fully integrated system (maybe with the Magento FishPig extension), I would like to know how to obtain the same result while installing WordPress in a subfolder of my TLD, i.e. www.pippo.com/wp.
Can I do that?
The main reason is to keep both systems separate and avoid the Magento system backup backing up WordPress too. I would like to have Magento on its own, in a third-level domain, keeping the Magento installation as clean as I can.
Thank you very much, I hope I was clear. I appreciate any advice.
|
How to exclude wordpress from magento root
|
Oops, without code my answer is useless. Here it is:
layerTile.getSource().setUrl('file:///local/{z}/{x}/{y}.jpg');
var serverBackup='https://{a-c}.tile.openstreetmap.org/';
var errorTilePath=urlBase+'css/images/error.png';
layerTile.getSource().setTileLoadFunction((function() {
return function(tile, src) {
if (UrlExists(src)) {
tile.getImage().src=src;
} else {
if (src.substr(0,4)=='file') {
var tmp=src.split('/').reverse();
src='https://'+['a', 'b', 'c'].sort(function() {return 0.5 - Math.random()})[0]+'.tile.openstreetmap.org/'+tmp[2]+'/'+tmp[1]+'/'+tmp[0].split('.')[0]+'.png';
if (UrlExists(src)) {
tile.getImage().src=src;
} else {
tile.getImage().src=errorTilePath;
}
} else {
tile.getImage().src=errorTilePath;
}
}
};
})());
function UrlExists(url){
try {
var http = new XMLHttpRequest();
http.open('HEAD', url, false);
http.send();
return http.status==200||http.status==403;
} catch(err){return false;}
}
|
I'm trying to add a backup route for tiles with OL3. I would like to test, on the error-load event, whether the source URL starts with "http".
If "yes": replace this tile with a custom tile.
If "no": change the source URL of this tile to another one and retry.
I think I need to use something like this:
layerTile.getSource().setUrl('file:///local/{z}/{x}/{y}.jpg');
var errorTilePath='https://image.noelshack.com/fichiers/2017/14/1491403614-errortile.png';
var serverBackup='http://otile1.mqcdn.com/tiles/1.0.0/map/';
layerTile.getSource().setTileLoadFunction((function() {
var tileLoadFn = layerTile.getSource().getTileLoadFunction();
return function(tile, src) {
var image = tile.getImage();
image.onload = function() {console.log('Tile ok : ' + src); };
image.onerror = function() {
console.log('Tile error : ' + src);
console.log(tile);
if (src.substr(0,4)!='http') {
var tmp=src.split('/').reverse();
var serverBackupPath=serverBackup+tmp[2]+'/'+tmp[1]+'/'+tmp[0].split('.')[0]+'.png';
console.log("Second url : " + serverBackupPath)
src=serverBackupPath;
tile.getImage().src=src;
var image = tile.getImage();
image.onload = function() {console.log('Tile backup ok : ' + src);};
image.onerror = function() {console.log('Tile backup error : ' + src); src=errorTilePath; tile.getImage().src=src; tileLoadFn(tile, src);}
} else {
console.log('Custom tile : ');
src=errorTilePath;
tile.getImage().src=src;
}
tileLoadFn(tile, src);
};
tileLoadFn(tile, src);
};
})());
With that, I can see that the backup tile is downloaded but it is not visible on the map.
Certainly, I misunderstood something.
Thanks in advance if somebody could help me.
|
OL3 add backup url
|
If you create a DB by using Odoo's DB manager (interface), there will already be basic tables (the module base will be installed automatically).
There are some ways to restore the DB. For example (template0 is a default template DB from PostgreSQL):
createdb -T template0 newdbname
cat backupfilename | psql newdbname
You shouldn't have an Odoo server running while doing this.
You could also use Odoo's database interface to back up and restore/duplicate databases.
|
Backup command: pg_dump -U username backupdbname -f backupfilename.sql
Restore Command: psql -v ON_ERROR_STOP=1 -f backupfilename.sql -d newdbname;
I actually tried these commands. The backup works, but while restoring it throws the error psql:pr_staging.sql:7624: ERROR: relation "res_company" already exists. For the restore we need a new DB, so I am creating the new DB from the browser manually; that's why I'm facing the error.
If I create the new DB using a terminal command instead, it does not show up in the browser at localhost:8069/web/database/selector.
How do I restore the backup DB?
|
psql:pr_staging.sql:7624: ERROR: relation "res_company" already exists
|
1) I ran "chkdsk /F"
2) rebooted my computer
3) ran "C:\MariaDB\bin mysql.exe -uroot -p < full_db_backup.sql"
Now it works. My speculation is that it was something related to hardware. Error code 22 might not be a MariaDB error code at all; it is some OS error code passed through to MariaDB. I had tried 2) and 3) a few times before and they did NOT work, so "chkdsk /F" is the magic here.
|
I run the following to dump the full database
C:\MariaDB\bin mysqldump.exe -uroot -p --single-transaction --flush-logs --master-data=2 --all-databases > full_db_backup.sql
on one computer.
Then, on another machine, I reinstall a fresh new MariaDB 10.1.22. And populate this new database instance with the following:
C:\MariaDB\bin mysql.exe -uroot -p < full_db_backup.sql
After running for half an hour, I get the following error:
mysql.exe: Error reading file '' (Errcode: 22 "Invalid argument")
This error does not even have enough information for me to debug or chase down. The SQL dump is 90 GB and pretty large; it is hopeless to grep '' in that file. I have no idea how to even start investigating this problem. By the way, both the original database instance and the new database instance are MariaDB 10.1.22.
|
Restoring MariaDB from a mysqldump-generated SQL file throws error code 22
|
As pointed out by s.m. in the comment above, the call to ZipFile.CreateFromDirectory() will attempt to create a zip file with the same location and file name for all the source directories.
If the intention is to create a single archive containing files from all the source directories, then the ZipFile.CreateFromDirectory() "shortcut" method cannot be used. Instead, you need to call ZipFile.Open(), get a ZipArchive object and use its CreateEntry() method (or the CreateEntryFromFile() extension) to add every file individually, as sketched below.
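A minimal sketch of that approach, reusing the SourceDir and TargetDir variables from the question below (the per-source-folder entry prefix and the absence of error handling and of the skip filters are simplifications of my own):
string zipPath = Path.Combine(TargetDir, "test.zip");
using (ZipArchive archive = ZipFile.Open(zipPath, ZipArchiveMode.Create))
{
    foreach (DirectoryInfo directory in SourceDir)
    {
        foreach (FileInfo file in directory.GetFiles("*", SearchOption.AllDirectories))
        {
            // Prefix each entry with its source folder name so files from
            // different source directories cannot collide inside the archive.
            string relative = file.FullName.Substring(directory.FullName.Length)
                                           .TrimStart(Path.DirectorySeparatorChar);
            archive.CreateEntryFromFile(file.FullName, Path.Combine(directory.Name, relative));
        }
    }
}
CreateEntryFromFile() is the ZipFileExtensions convenience wrapper around CreateEntry() plus a stream copy, so this follows the approach described above.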
|
private void btn_Backup_Click(object sender, EventArgs e)
{
List<DirectoryInfo> SourceDir = this.lbox_Sources.Items.Cast<DirectoryInfo>().ToList();
string TargetDir = this.tbox_Target.Text;
foreach (DirectoryInfo directory in SourceDir)
{
foreach (var file in directory.GetFiles())
if (this.checkbox_zipfiles.Checked == true)
{
System.IO.Compression.ZipFile.CreateFromDirectory(directory.FullName, TargetDir + @"\test.zip");
}
else
{
Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(directory.FullName, TargetDir, true);
}
}
}
I'm creating a backup application, and when I try to zip the files I need to back up it says: "The file 'C:\Users\Lada1208\Desktop\test\test.zip' already exists."
even though the folder is empty beforehand, so it's trying to create the test.zip file twice for some reason. Any idea why?
|
IOException file already exists C#
|
Have a look at this answer:
Fetch all rows in cassandra
It's just a matter of adding code to export, let's say, every row to CSV or some similar format that would be fine for you.
You will also have to write a script to load this data back, but those are just simple inserts.
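For illustration, the export part of that pattern could look roughly like this (sketched with the DataStax C# driver rather than the Java driver the asker mentions; the contact point, keyspace, table and column names are made up, and CSV quoting/paging concerns are ignored):
using System.IO;
using Cassandra;   // DataStax driver package

class SingleTableBackup
{
    static void Main()
    {
        using (Cluster cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build())
        using (ISession session = cluster.Connect("my_keyspace"))
        using (StreamWriter csv = new StreamWriter("my_table_backup.csv"))
        {
            // Fetch every row of the single table and write one CSV line per row.
            RowSet rows = session.Execute("SELECT id, name, created_at FROM my_table");
            foreach (Row row in rows)
            {
                csv.WriteLine(string.Join(",",
                    row.GetValue<object>("id"),
                    row.GetValue<object>("name"),
                    row.GetValue<object>("created_at")));
            }
        }
    }
}
Running it weekly is then just a matter of invoking the same logic from whatever scheduler you already use.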
|
Is there a way that I can back up a single table in Apache Cassandra with Java code? I want to run such code once every week using a scheduler. Can someone share links to such resources, if there are any?
|
Cassandra backup using Java code
|
Is your desired output something like:
$host/mnt/synology/Torrents/Games/
where $host is the name of each one of these IPs (192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43)?
When building the path for mkdir you are doing $(hostname), but that command's output will be your local machine's name; it won't run on each host.
If you want each host's name, you should launch that command through ssh on each IP and retrieve the output.
|
I'm trying to learn to write some simple bash scripts and I want to create a backup script that will use rsync to fetch predetermined directories and sync them to a backup machine. Here is the code:
#!/bin/bash
#Specify the hosts
ip=(192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43)
#currently unused
webdirs=(/etc/nginx/sites-available/ /var/www/ghost)
#Directory to store everything
NAS=/mnt/synology/Torrents/Games/
#Remote-hosts to rsync from
for i in "${ip[@]}"
do
HOSTNAME=$(hostname)
NAS2=$HOSTNAME$NAS
if [ ! -d "$NAS2" ]; then
echo $NAS2 "does not exist, creating..."
mkdir -p $NAS2
else
echo "inside the else"
sudo rsync -anvzP -e "ssh -i $HOME/.ssh/id_rsa" victor@$i:/etc $NAS2/
fi
done;
It's not done, but I've run into a problem. I can't figure out how to create new directories for each machine. Right now it's only creating the directory for my web server.
EDIT: I solved it by using ssh and command substitution, all I did was this:
HOSTNAME=$(ssh user@$i "hostname")
The variable $HOSTNAME will change after each iteration. Exactly what I want.
|
How do I loop through an array of IP addresses to get the hostname of each machine in bash?
|
Check out MultCloud. This website allows you to manage all your cloud storage accounts together. You can add your company's Drive account and another Drive account, Dropbox, or anything else, then sync the company account with the account you want.
Nice, thank you. Maybe a free solution exists? (The cron feature is premium in MultCloud.)
– user662264
You can use the sync feature, it's free.
– Suhail Akhtar
|
I have a Google Drive for my company, and I would like to back up all this data (mainly documents...) every day (in case somebody accidentally deletes it). Is there an easy solution to do that? I mean copying all this data automatically to another Google Drive account or to my local disk?
Thank you,
|
Best way to backup (automatically, cron) google drive docs
|
I am a technology consultant for Symantec, and I have a lot of customers who use Java code to create applications. If you add exceptions for the application and its path to Symantec AV, you will be able to work hassle-free.
|
We have Java code creating, writing and deleting new files on disk on Windows, but the file operations fail sporadically;
sometimes files are created/deleted with a delay, sometimes the operation just fails.
I suspect the antivirus or backup program causes this, and it happens more often with AVG, Symantec or Carbonite installed.
Has anyone else run into this problem as well?
Any suggestions for working with those antivirus or backup programs?
|
File writing fails sporadically with antivirus or backup programs on Windows
|
BACKUP DATABASE PLUS ARCHIVELOG;
Backs up the entire database along with the archive logs.
BACKUP ARCHIVELOG ALL;
Backs up the archive logs alone.
|
What is the difference between those two commands regarding archive logs:
BACKUP DATABASE PLUS ARCHIVELOG;
and
BACKUP ARCHIVELOG ALL;
|
Difference between archivelog options in rman
|
.DateModified is not VBScript; the FileSystemObject property is .DateLastModified. Start reading here. There is DateDiff, but as Dates are Doubles under the hood, comparisons with < will work too. In code:
>> Set f = CreateObject("Scripting.FileSystemObject").GetFile(WScript.ScriptFullName)
>> dlm = f.DateLastModified
>> WScript.Echo TypeName(dlm), dlm, "(german locale)"
>> dlmn = DateAdd("s", 2, dlm)
>> WScript.Echo TypeName(dlmn), dlmn, "(german locale)"
>> WScript.Echo DateDiff("s", dlmn, dlm), DateDiff("s", dlm, dlmn), CStr(dlm < dlmn)
>> WScript.Echo CDbl(dlm)
>> WScript.Echo CDbl(dlmn)
>>
Date 22.11.2013 13:09:53 (german locale)
Date 22.11.2013 13:09:55 (german locale)
-2 2 Wahr
41600,5485300926
41600,5485532407
|
Good morning,
I'm a beginner in programming, so I'm sorry if I do something wrong.
I've written some code in VBScript to back up some files from one folder to another.
My problem is comparing the file dates in both folders and allowing the copy only if the file is new or its date has changed.
Here is my code; can someone help me find the problem, please?
I have tried but it is not working.
' Copy a Folder
'Const OverWriteFiles = False
Dim strSourceFolder, strDestFolder
strSourceFolder = "E:\test1"
strDestFolder = "C:\test1"
Set objFSO = CreateObject("Scripting.FileSystemObject")
objFSO.CopyFolder "strSourceFolder" , "strDestFolder"
For each file in StrSourceFolder
ReplaceIfNewer ("file, strDestFolder")
Next
Sub ReplaceIfNewer (SourceFile, DestFolder)
Dim DateModifiedSourceFile, DateModifiedDestFile
DateModifiedSourceFile = SourceFile.DateModified()
DateModifiedDestFile = DestFolder & "\" & SourceFile.DateModified()
If DateModifiedSourceFile < DateModifiedDestFile then
Copy SourceFile to SourceFolder Else
End If
' Verify that a Folder Exists
'Set objFSO = CreateObject("Scripting.FileSystemObject")
If objFSO.FolderExists("strDestFolder") Then
MsgBox "Backup Copy Done." & vbCrLf & (Day(Now) & "\" & Month(Now) & "\" & Year(Now)) , Vbinformation
Set objFolder = objFSO.GetFolder("strDestFolder")
Else
MsgBox "Folder does not exist." , vbCritical , "Folder does not exist."
End if
Thanks, and be patient!
|
Date Comparison and copy only new files
|
You could store that information outside of the cluster, like in Table Storage, and read it from there during restore.
edit:
I've been working on an open source project that will help make this simpler. Feedback & contributions welcome.
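For illustration, a minimal sketch of that idea with the Azure.Data.Tables client (the package choice, table name, key scheme and placeholder values are assumptions for this sketch, not something the answer prescribes):
using System;
using Azure.Data.Tables;

string storageConnectionString = "<storage connection string>";    // placeholder
string serviceName = "fabric:/MyApp/MyStatefulService";             // placeholder partition key

var table = new TableClient(storageConnectionString, "BackupPointers");
table.CreateIfNotExists();

// After each successful backup, record where it landed.
var entity = new TableEntity(serviceName, DateTime.UtcNow.ToString("yyyyMMddHHmmss"))
{
    ["BackupFolder"] = "backups/MyStatefulService/some-timestamped-folder"  // placeholder
};
table.UpsertEntity(entity);

// Inside OnDataLossAsync, look up the folder of the version you want to restore
// instead of always taking the latest one.
TableEntity chosen = table.GetEntity<TableEntity>(serviceName, "20170227101600").Value;
string folderToRestore = (string)chosen["BackupFolder"];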
|
I have Backup and Restore implemented with service fabric.
My backups go into folders on azure with time stamp and the service name.
At the moment I just search the latest backup, but what if I want to restore to an older version?
I invoke the data loss using
await fabricClient.TestManager.StartPartitionDataLossAsync(operationId, partitionSelector, DataLossMode.FullDataLoss);
This triggers the data loss here.
protected override async Task<bool> OnDataLossAsync(RestoreContext restoreCtx, CancellationToken cancellationToken)
{
await this.SetupBackupManager(null);
try
{
string backupFolder = await this.backupManager.RestoreLatestBackupToTempLocation(cancellationToken);
RestoreDescription restoreRescription = new RestoreDescription(backupFolder, RestorePolicy.Force);
await restoreCtx.RestoreAsync(restoreRescription, cancellationToken);
DirectoryInfo tempRestoreDirectory = new DirectoryInfo(backupFolder);
tempRestoreDirectory.Delete(true);
return true;
}
catch (Exception e)
{
throw;
}
}
I have my code in my backup manager to handle it, but I can't think/find a way to pass anything to OnDataLossAsync.
RestoreLatestBackupToTempLocation finds the latest backup.
|
Service Fabric : Restore to a particular version?
|
The main issue with your code is that looping through all files in the directory with ls * without some sort of filter is a dangerous thing to do.
Instead, I've used for i in $(seq 9 -1 1) to loop through files from *_9 to *_1 to move them. This ensures we only move backup files, and nothing else that may have accidentally got into the backup directory.
Additionally, relying on the sequence number to be the 18th character in the filename is also destined to break. What happens if you want more than 10 backups in the future? With this design, you can change 9 to be any number you like, even if it's more than 2 digits.
Finally, I added a check before moving site_com_${DATE}.tar in case it doesn't exist.
#!/bin/bash
DATE=`date "+%Y%m%d"`
cd "/home/user/backup/com"
if [ -f "site_com_*_10.tar" ]
then
rm "site_com_*_10.tar"
fi
# Instead of wildcarding all files in the directory
# this method picks out only the expected files so non-backup
# files are not changed. The renumbering is also made easier
# this way.
# Loop through from 9 to 1 in descending order otherwise
# the same file will be moved on each iteration
for i in $(seq 9 -1 1)
do
# Find and expand the requested file
file=$(find . -maxdepth 1 -name "site_com_*_${i}.tar")
if [ -f "$file" ]
then
echo "$file"
# Create new file name
new_str=$((i + 1))
to_rename=${file%_${i}.tar}
mv "${file}" "${to_rename}_${new_str}.tar"
fi
done
# Check for latest backup file
# and only move it if it exists.
file=site_com_${DATE}.tar
if [ -f $file ]
then
filename=${file%.tar}
mv "${file}" "${filename}_1.tar"
fi
|
I was able to script the backup process, but I want to make another script for my storage server for basic file rotation.
What I want to make:
I want to store my files in my /home/user/backup folder. Only want to store the 10 most fresh backup files and name them like this:
site_foo_date_1.tar site_foo_date_2.tar ... site_foo_date_10.tar
site_foo_date_1.tar being the most recent backup file.
Past number 10 the file will be deleted.
My incoming files from the other server are simply named like this: site_foo_date.tar
How can I do this?
I tried:
DATE=`date "+%Y%m%d"`
cd /home/user/backup/com
if [ -f site_com_*_10.tar ]
then
rm site_com_*_10.tar
fi
FILES=$(ls)
for file in $FILES
do
echo "$file"
if [ "$file" != "site_com_${DATE}.tar" ]
then
str_new=${file:18:1}
new_str=$((str_new + 1))
to_rename=${file::18}
mv "${file}" "$to_rename$new_str.tar"
fi
done
file=$(ls | grep site_com_${DATE}.tar)
filename=`echo "$file" | cut -d'.' -f1`
mv "${file}" "${filename}_1.tar"
|
shell backup script renaming
|
If I remember correctly, you need to set the max_allowed_packet in your my.cnf to a large enough value to accommodate the largest data blob in your dump file, and restart the MySQL server.
Then, you can use a restore command like this one :
mysql --max_allowed_packet=64M < your_dumpfile.sql
More info here:
https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_max_allowed_packet
Ah, right. I forgot to mention that, my bad. I already tried setting the max_allowed_packet to the maximum (1073741824) and added the same value to mysql but nothing changed. Error message stayed the same.
– mezzodrinker
|
I am moving a MySQL database from a now inaccessible server to a new one. The dump contains tables which in turn contain binary blobs, which seems to cause trouble with the MySQL command line client. When trying to restore the database, I get the following error:
ERROR at line 694: Unknown command '\''.
I inspected the line at which the error is occurring and found that it is a huge insert statement (approx. 900k characters in length) which seems to insert binary blobs into a table.
Now, I have found these two questions that seem to be connected to mine. However, both answers proved not to solve my issue. Adding --default-character-set=utf8 or even --default-character-set=latin1 didn't change anything, and creating a dump with --hex-blob is not possible because the source database server is no longer accessible.
Is there any way how I can restore this backup via the MySQL command line client? If yes, what do I need to do?
Please let me know if you need any additional information.
Thanks in advance.
EDIT: I am using MySQL 5.6.35. Also, in addition to the attempts outlined above, I have already tried increasing the max_allowed_packet system variable to its maximum value - on both server and client - but to no avail.
|
Restoring a MySQL dump with binary blobs
|
#!/bin/bash
DEST_PATH=/Volumes/PrivateMain/Media
mkdir -p $DEST_PATH
SAVEIFS=$IFS
IFS=$(printf "\n\b")
for i in $(find "/Users" -iname "*.jpg")
do
FILENAME="$(basename $i)"
MD5="$(md5 -q $i)"
cp "$i" "$DEST_PATH/$MD5-$FILENAME"
done
IFS=$SAVEIFS
Thanks to all who helped out! However, due to the possibility of overwriting two files with the same name, I have edited the script as below.
SAVEIFS=$IFS
IFS=$(printf "\n\b")
COUNTER=0;
for i in $(find "/Users" -iname "*.jpg");
do
BASE=`expr "$i" : '.*/\(.*\)\..*'`;
EXT=`expr "$i" : '.*/.*\.\(.*\)'`;
COUNTER=`expr $COUNTER + 1` ;
cp "$i" ""${tardir}"/"$x"/JPG/"$BASE"_"$COUNTER"."$EXT""
done
IFS=$SAVEIFS
I think you want cp "$i" "${tardir}/$x/JPG/${BASE}_$COUNTER.$EXT".
– chepner
|
Below is a snippet of a script I am working on for media backup. The script runs as expected when called from the Terminal command line. However, after wrapping the script with Platypus into an app, the destination directory is created but the for loop does not run and no media is copied to the destination folder. Does anyone know what I am doing wrong here?
#!/bin/sh
DEST_PATH=/Volumes/MediaBackup
mkdir -p $DEST_PATH
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for i in $(find "$PWD" -iname "*.jpg")
do
FILENAME="$(basename $i)"
MD5="$(md5 -q $i)"
cp "$i" "$DEST_PATH/$MD5-$FILENAME"
done
IFS=$SAVEIFS
Gentle dudes and/or ladies, THANK YOU! Below is the working script using your comments. Thanks for a quick turnaround. Should've done this days ago.
#!/bin/bash
DEST_PATH=/Volumes/PrivateMain/Media
mkdir -p $DEST_PATH
SAVEIFS=$IFS
IFS=$(printf "\n\b")
for i in $(find "/Users" -iname "*.jpg")
do
FILENAME="$(basename $i)"
MD5="$(md5 -q $i)"
cp "$i" "$DEST_PATH/$MD5-$FILENAME"
done
IFS=$SAVEIFS
|
For Loop Breaks after wrapping with Platypus
|
Bidirectional communication will be needed, as PBX (Private Branch Exchange) will not work without it. Moreover, to transmit data from the client to the master server, two-way communication is needed. The firewall plays a role in it, but the ports still need to be opened bidirectionally.
If you have any questions, please let me know.
|
Our backup guy asked me to open up a firewall ticket to open up connections from our terrestrial data center to AWS. He asked for ports 1556, 13782, 13724 to be opened up bi-directional between the backup server in our data center to the subnets in AWS.
My question is, why is he asking for bi-directional communication? Usually I open up the firewall from the source device to the destination and the firewall allows for bi-directional communication.
He claims that the communication can be initiated by either side. Is he right about that? Because if he's not, I'd like to save some work for both myself and the firewall team.
|
NetBackup port openings - does it require bi-directional communication?
|
Here I have added that feature to your script.
Usage:
./yourscript.sh 3200
script:
#!/bin/bash
# make sure you gave a number of seconds:
[ 0$1 -gt 0 ] || exit
while true; do
SOURCE="/var/www/my_web/load/"
BACKUP="/home/your_user/load/"
LBACKUP="/home/your_user/load/latest-full/"
DATE=$(date +%Y-%m-%d-%T)
DESTINATION="$BACKUP"/"$DATE"-diff/
rsync -av --compare-dest="$LBACKUP" "$SOURCE" "$DESTINATION"
cd "$DESTINATION"
find . -depth -type d -empty -delete
sleep $1
done
If you get an error like bash: ./yourscript.sh: Permission denied, then you need to run this once: chmod +x yourscript.sh, to make the script executable.
To continue running in the background even after you leave the terminal window:
nohup ./yourscript.sh 3200 &
To run in the background on a schedule even after a restart:
use cron, e.g., Using crontab to execute script every minute and another every 24 hours
|
I have a problem with a bash script. I need to add some content to it. My script needs to run at a certain time, but I don't know how to do that. It should work like this:
I have a variable to which I assign a time like 3200s. When I run the program, the script should create backups every 3200s, but only if some files changed. What am I doing wrong?
#!/bin/bash
SOURCE="/var/www/my_web/load/"
BACKUP="/home/your_user/load/"
LBACKUP="/home/your_user/load/latest-full/"
DATE=$(date +%Y-%m-%d-%T)
DESTINATION="$BACKUP"/"$DATE"-diff/
rsync -av --compare-dest="$LBACKUP" "$SOURCE" "$DESTINATION"
cd "$DESTINATION"
find . -depth -type d -empty -delete
|
Run bash script on schedule
|
We found that using rsync to copy the data from the old server to the new server fixed the issue. Apparently rsync preserved whatever metadata the rsync program was looking at. Our first two attempts used scp and a USB stick to copy the files; those methods did not work.
|
Our current setup consists of about 100 remote sites that gather data and then once a week transfer that data to a local server using rsync over a cellular connection. The data is stored at the remote sites for 12 months before it is deleted. All remote sites have been operating for more than a year. (They have a year's worth of data but only send down a weeks worth of data at a time.)
Recently our server needed to be replaced. All of the data for the sites has been backed up and that data has been put onto the new server.
I did a test using one of the remote sites and manually forced a push of data using rsync to the new server. It worked, but instead of pushing just the new data, it pushed all of the data for the past year, even though the data already existed on the new server it was pushing to. Rsync appeared not to recognize that most of the files already existed on the new server. (After the test sync, there were no duplicate files on the server, so rsync either overwrote the files or re-wrote them with the same data as before.)
Here is my question: How can I get rsync to recognize that the files it is trying to push already exist on the new server and not push files that are already there?
This may seem like a trivial question, because after rsync runs one time on each remote site, everything will have flushed out. However, my connections are cellular and I pay for the data I use. Sending a year's worth of data, when my cellular plans are sized for sending a month's worth of data, will result in severe overages costing us a lot of money. I have until Sunday morning at 2 AM to figure something out; otherwise all of the remote sites will start to send all the data they have stored for the past year.
Any help is greatly appreciated.
|
rsyncing to a new destination, but the files are already there. Can I let rsync know not to resend all the files?
|
To fully restore a MyISAM table without the .MYI (i.e. solely from the .frm and the .myd files), run:
REPAIR TABLE tableName USE_FRM;
To make a fast and compact backup of the structure+data, run:
FLUSH TABLES tableName WITH READ LOCK;
[make a copy of the .frm and .myd files. No need to copy the .myi]
UNLOCK TABLES
Copying the .myd file provides a much faster backup than using mysqldump, for large tables. I ran a quick test and a 6G table took 6 minutes to backup with mysqldump vs a 5 seconds direct copy. The mysqldump file was the combined size of the myd+myi files. The .myd file can/should be compressed (I use 7z).
Direct copy is one of the several backup methods discussed in MySQL's official documentation
|
Is it possible to backup the .MYD file only? (and rebuild the .MYI if/when there is a catastrophic failure)
I'd like to backup rather large tables offsite while minimizing bandwidth usage. Data is critical, index files (5G+) are not. The idea is to run regular backups of the .frm and .myd files and rebuild the indexes iff there were a catastrophic failure (i.e. local backups destroyed by fire or stolen).
Repair with .frm and .myd only gives me an error message. Is there an easy workaround?
|
Backup myIsam .MYD only
|
If possible, don't repeat yourself
:chooseYes
for %%a in ( Desktop Documents Favorites Pictures Downloads ) do (
robocopy "%userprofile%\%%a" "%driveLetter%\%%a" /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
)
CLEANMGR /C: /SAGERUN:65535 /SETUP
TIMEOUT /T 1 /NOBREAK >NUL
DEFRAG /C /H /V /W
PAUSE
EXIT
Note: the mkdir has been suppressed, as the robocopy command will create the target folder.
|
I've made this script (minus all my ECHO lines, for your readability) to back up certain user folders to an external device. It's working flawlessly, but I'm wondering if anyone has any ideas as to how I could simplify it (e.g. make it more 'clever').
I'm new to this site and coding. Please bear with me!
All help appreciated.
@ECHO OFF
SET driveLetter=%~d0
:CHOOSE
SET /P CHOOSE=Are you sure you want to continue [Y/N]?
IF /I "%CHOOSE%" == "Y" GOTO :chooseYes
IF /I "%CHOOSE%" == "N" GOTO :chooseNo
GOTO :CHOOSE
:chooseYes
MKDIR %driveLetter%\Desktop
MKDIR %driveLetter%\Documents
MKDIR %driveLetter%\Favorites
MKDIR %driveLetter%\Pictures
MKDIR %driveLetter%\Downloads
TIMEOUT /T 1 /NOBREAK >NUL
ROBOCOPY %USERPROFILE%\Desktop\ %driveLetter%\Desktop /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
ROBOCOPY %USERPROFILE%\Documents\ %driveLetter%\Documents /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
ROBOCOPY %USERPROFILE%\Favorites\ %driveLetter%\Favorites /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
ROBOCOPY %USERPROFILE%\Pictures\ %driveLetter%\Pictures /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
ROBOCOPY %USERPROFILE%\Downloads\ %driveLetter%\Downloads /E /COPYALL /ZB /MT:20 /XJ /R:2 /W:5
CLEANMGR /C: /SAGERUN:65535 /SETUP
TIMEOUT /T 1 /NOBREAK >NUL
DEFRAG /C /H /V /W
PAUSE
EXIT
:chooseNo
TIMEOUT /T 3 /NOBREAK >NUL
Best regards.
|
Simplifying batch backup script
|
Yes. Access to Google Cloud Storage buckets and objects are controlled by ACLs that allow you to specify individual users, service accounts, groups, or project role.
You can add users to any existing object through the UI, the gsutil command-line utility, or via any of the APIs.
If you want to grant one specific user the ability to write objects into project X, you need only specify the user's email:
$> gsutil acl ch -u user@example.com:W gs://bucket-in-project-x
If you want to say that every member of the project my-project is permitted to write into some bucket in a different project, you can do that as well:
$> gsutil acl ch -p members-my-project:W gs://bucket-in-project-x
The "-u" means user, "-p" means 'project'. User names are just email addresses. Project names are the strings "owners-", "viewers-", or "editors-" and then the project's ID. The ":W" bit at the end means "WRITE" permission. You could also use O or R or OWNER or READ or WRITE instead.
You can find out more by reading the help page: $> gsutil help acl ch
|
A question about Google Storage:
Is it possible to give r/o access to a (not world-accessible) storage bucket to a user from another Google project?
If yes, how?
I want to use it to back up data to another Google project, in case somebody accidentally deletes all the storage buckets from our project.
|
Sharing data between several Google projects
|
So after some research, I found the cause of the error message. The problem came from within the virtual machine itself. The VM or the operating system was not configured properly, so Wbadmin would not accept the destination of \\localhost\NetworkShare.
When I tried backing up to a real network drive, everything worked as planned. The * wildcard, intended to grab only the 6 test files numbered 1-6, worked correctly. However, in real practice listing each individual file name separated by commas will probably be more useful for others. Here is the command that worked:
wbadmin start backup -backuptarget:\\(IP address of network)\Public -include:C:\!Test\testFile*
Here was the log report:
Backed up C:\
Backed up C:\!Test\
Backed up C:\!Test\testFile1.txt
Backed up C:\!Test\testFile2.txt
Backed up C:\!Test\testFile3.txt
Backed up C:\!Test\testFile4.txt
Backed up C:\!Test\testFile5.txt
Backed up C:\!Test\testFile6.txt
I hope this helps someone else
|
I'm trying to set up and learn the Wbadmin command-line prompts for making my own backups. I've created a test on Server 2008 R2 in VMware, and I've created a separate B: drive for backups. I'm trying to target specific files, and I've created 6 testFile#.txt files on the C: drive under the !Test folder.
The command that I've used is:
wbadmin start backup -backupTarget:\\localhost\NetworkShare -include:C:\!Test\testFile*
The process starts, but ends up crashing. Screenshot attached below. The logs for both the backup and the error are blank. The main error message is:
There was a failure in updating the backup for deleted items.
The requested operation could not be completed due to a file system limitation
What am I doing wrong? B: was formatted to NTFS, and I've followed the instructions exactly.
|
Wbadmin backup failed due to a file system limitation
|
for i in $(aws dynamodb list-tables | jq -r '' | grep 'QA*' | tr ',' ' ' | cut -d'"' -f2);
do
    echo "======= Starting backup of $i at $(date) =========="
    python dynamodump.py -m backup -r us-east-1 -s $i
done
The above script will work if you want to back up multiple DynamoDB tables. Prior to running the script, you have to install jq: https://stedolan.github.io/jq/download/
|
Not able to download multiple dynamoDB tables by using dynamodump
$ python dynamodump.py -m backup -r us-east-1 -s 'DEV_*'
INFO:root:Found 0 table(s) in DynamoDB host to backup:
INFO:root:Backup of table(s) DEV_* completed!
But I'm able to download if I give a single table name or "*" (download all DynamoDB tables).
I have followed this procedure which is in the below link:
https://github.com/bchew/dynamodump
Can anyone suggest how to download multiple DynamoDB tables with a specific pattern (like QA_* / DEV_* / PROD_* / TEST_*)?
|
Not able to download multiple dynamoDB tables by using dynamodump
|
If you'd like to prevent any downtime, you can have the purchase information stored in a queue, separate from the "main" server that you back up regularly, and have a job that reads from that queue and stores the data in your "main" server. Use a persistent queue that doesn't need to be backed up (as long as it is consumed as soon as possible) so it can keep accepting data while your server is down.
Once your backup is done, the server can read whatever info there is stored in the queue and process it.
If you stop redirecting to PayPal 10 minutes before the backup, what would happen if a user has been redirected 11 minutes before the backup? What would happen if the backup took 2 minutes more than what you initially thought was a safe interval? Don't do that :)
|
I am working on a shopping website where users are redirected to paypal for payment.
On the server I have a scheduled backup task running once a month that last something like 15 minutes. During backup the website will be suspended.
However, if a user has just been redirected to paypal before my server is suspended then there is the risk of a user making a payment but the purchased items will not be stored in the DB since my server is suspended.
What are the options to handle this situation? Should I write a little PHP to prevent purchases already 10 minutes before server backup? Are there any other common options? Thanks!
|
Prevent Payments during server backup
|
You aren't usually given homework/tasks for which you have not previously been provided sufficient information. When you are, the intention is usually that you actually put in some time and effort researching.
For that reason I will only provide this. You can put in your own time and effort to look up the commands and work out how it works:
@Echo Off
Set/P "SrcDir=Enter Source Folder: "
If Not Exist "%SrcDir%\" Exit/B
Set/P "DstDir=Enter Destination Folder: "
ROBOCOPY "%SrcDir%" "%DstDir%" *.c *.txt *.jpg *.csv /S
|
I need help with my school project script. I thought it would be easy, but apparently I found myself a bit confused by it.
The task is to:
Write a script which gets two directories as parameters. The first directory must exist. From the first directory and its subfolders, files such as .c, .txt, .jpg, .csv... will be backed up to the second directory, which is nonexistent or empty.
I figured out just the copying part...
@echo
if %username%==administrator goto useradmin
rem # files with C
XCOPY "%USERPROFILE%\Documents\iT universe city\Source Folder\*.c" "%USERPROFILE%\Desktop\jpg\" /D /I /S /Y
rem # files with TXT
XCOPY "%USERPROFILE%\Documents\iT universe city\Source Folder\*.txt" "%USERPROFILE%\Desktop\jpg\" /D /I /S /Y
rem # files with JPG
XCOPY "%USERPROFILE%\Documents\iT universe city\Source Folder\*.jpg" "%USERPROFILE%\Desktop\jpg\" /D /I /S /Y
rem # files with CSV
XCOPY "%USERPROFILE%\Documents\iT universe city\Source Folder\*.csv" "%USERPROFILE%\Desktop\jpg\" /D /I /S /Y
|
Windows script to backup specific files from directory to another on same machine
|
Well, this may not be the answer you are looking for, but I would use Git to track the changes, or maybe even git-annex if the files are too big, for example.
Initialize the git repository in the directory you want to track: git init
Tell git to track all files: git add .
Commit the changes: git commit -a -m "initial commit"
After 24 hours, run git diff to see the changes.
|
Is it possible to take some kind of "dump" of a directory on a Linux (Ubuntu) server that I can later use to compare against for new/modified files?
The idea being something like this:
Dump directory data (like file hashes)
24 hours later I take another dump and compare against #1 to find new or modified files
|
dump directory data to a file for new/modified comparison later on a linux server
|
I'm guessing MongoDB still needs to continue saving data while it's running, but from what you say it seems safe to back up your data, as nothing is changing your collection data.
However, if you're unsure you could always run db.shutdownServer(), which will force a flush to disk and stop the service; then back up the files and start the mongod process again.
Since this is a file system backup, I am concerned about the changing of any files. If I find no resolution to my issue, stopping the server seems the only way. Not what I want; this would generate alert emails on our system.
– Roland
db.shutdownServer() will only shut down the mongod service, but agreed, it isn't ideal getting false alerts from monitoring.
– Kevin Smith
|
When backing up the mongo file system using tar, using a secondary in a replica set, tar says files have changed during the tar process even though the lock command has been run. For reliable backups this should not happen. What am I missing?
devtest:SECONDARY> use admin
switched to db admin
devtest:SECONDARY> db.fsyncLock()
{
"info" : "now locked against writes, use db.fsyncUnlock() to unlock",
"seeAlso" : "http://dochub.mongodb.org/core/fsynccommand",
"ok" : 1
}
Using the find command looking for changed files while the tar process is running confirms this. Comparing before and after versions of these files with diff also confirms. It appears to always be these files.
/var/lib/mongo # find -cmin 1
.
./WiredTiger.turtle
./WiredTiger.wt
./diagnostic.data
./diagnostic.data/metrics.interim
Using Mongo 3.2 and wiredtiger configured.
/etc/mongo.conf
storage:
directoryPerDB: true
dbPath: /var/lib/mongo
engine: "wiredTiger"
wiredTiger:
engineConfig:
directoryForIndexes: true
collectionConfig:
blockCompressor: snappy
journal:
enabled: true
Documentation seems to imply files will not be changed. Maybe only "data" files will not change...
https://docs.mongodb.com/v3.2/reference/method/db.fsyncLock/
Changed in version 3.2: db.fsyncLock() can ensure that the data files do not change for MongoDB instances using either the MMAPv1 or the WiredTiger storage engines, thus providing consistency for the purposes of creating backups.
In previous MongoDB versions, db.fsyncLock() cannot guarantee a consistent set of files for low-level backups (e.g. via file copy cp, scp, tar) for WiredTiger.
|
mongo (3.2): Backing up with fsyncLock() files still being modified on the filesystem
|
You might try the "Retrieving all contacts" request from the Google Contacts API:
To retrieve all of a user's contacts, send an authorized GET request to the following URL:
https://www.google.com/m8/feeds/contacts/{userEmail}/full
With the appropriate value in place of userEmail.
However, the other two sources you mentioned are not supported by the API.
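For illustration, that authorized GET is just a plain HTTP request; a minimal sketch follows (shown with C#'s HttpClient purely to make the request shape concrete — the asker would do the equivalent in Objective-C, and the token, email and GData-Version header are placeholders/assumptions based on the usual Contacts API conventions):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ContactsFeedDemo
{
    static async Task Main()
    {
        string userEmail = "user@example.com";          // placeholder
        string accessToken = "<oauth2-access-token>";   // placeholder

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Authorization", "Bearer " + accessToken);
            client.DefaultRequestHeaders.Add("GData-Version", "3.0");

            // Retrieves the full contacts feed for the given account as XML.
            string url = $"https://www.google.com/m8/feeds/contacts/{userEmail}/full";
            string feedXml = await client.GetStringAsync(url);

            Console.WriteLine($"Received {feedXml.Length} characters of contact data.");
        }
    }
}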
|
I want to back up the contact files which reside on the SIM, in a Google account, or on the iPhone, according to the user's selection... Is it possible to do such a thing?
|
How to create a VCF file of the contacts which are on the SIM or in a Google account (iPhone, Objective-C)
|
Yes, it can be done using a PowerShell or batch-file script (cmd tag seems to imply Windows OS).
Let's choose the latter. The next batch-file code snippet would do the same as the command in question, XCOPY G:*.BMP X:\ /h/i/c/k/e/y/r/d:
set "DriveIn=G"
set "DriveOu=X"
XCOPY %DriveIn%:*.BMP %DriveOu%:\ /h/i/c/k/e/y/r/d
Instead of hard-coded DriveIn and DriveOu, you can prompt for user input:
set /P "DriveIn=please choose SOURCE drive letter "
set /P "DriveOu=please choose TARGET drive letter "
XCOPY %DriveIn%:*.BMP %DriveOu%:\ /h/i/c/k/e/y/r/d
Hints for (necessary!) validity checks:
:dfromscratch
set "DriveIn="
set "DriveOu="
:dsource
set /P "DriveIn=please choose SOURCE drive letter "
rem basic validity check
if not defined DriveIn goto :dsource
if not exist "%DriveIn%:\" goto :dsource
:dtarget
set /P "DriveOu=please choose TARGET drive letter "
rem basic validity check
if not defined DriveOu goto :dtarget
if not exist "%DriveOu%:\" goto :dtarget
if /I "%DriveIn%"=="%DriveOu%" goto :dfromscratch
rem involve more validity check here!!!
XCOPY %DriveIn%:*.BMP %DriveOu%:\ /h/i/c/k/e/y/r/d
Some hints for (more) validity checks.
To show available disk drives:
wmic logicaldisk get Description, DeviceID, DriveType, FileSystem, VolumeName
To get a list of available disk drives programmatically, parse that wmic output with a for /F loop, either inside the batch script or as a one-liner pasted into an open cmd window.
Another approach (only a draft, needs more elaboration): build a list of available drive letters and let the user pick from that list instead of typing a letter.
|
What I am trying to do is a backup via cmd commands, but the problem is that when I take the USB backup to another PC to back up, the drive letters are different.
For example when I do:
XCOPY G:\*.BMP X:\ /h/i/c/k/e/y/r/d
On the other computer the drives will not be G and X.
What I am seeking is whether it is possible to make a program where I can enter with the keyboard which drive I want to back up and to which drive.
For example:
XCOPY "driver name keyboard input":\*.BMP "driver name keyboard input":/" /h/i/c/k/e/y/r/d
|
Enter drives for backup via keyboard
|
As joop explained, your SQL file is inconsistent.
There is a foreign key constraint from raffle.user_id to "user".id, which means that for every value in raffle.user_id there must be a row in "user" where id has the same value.
Now there is no row inserted in "user" with an id equal to 1, but the script attempts to insert a row in raffle with user_id equal to 1.
That violates the foreign key constraint and causes an error. Once there has been an error in a PostgreSQL transaction, all you can do is ROLLBACK. Until you do that, all statements in the transaction will fail with the error you observe.
The only solutions you have are either to fix the data so that they are consistent or to give up consistency by removing the foreign key constraint.
Remark: it is a bad idea to choose a reserved SQL keyword like user as a name.
What is the foreign key constraint that I must remove?
– riech
The foreign key constraint from raffle.user_id to "user".id.
– Laurenz Albe
You must catch the first error and fix that, then the others will go away.
– Laurenz Albe
The one that you receive first. The one on top of the output. Capture the output to a file to make it easier.
– Laurenz Albe
|
I'm having a hard time understanding what this error means. The command I used was:
psql -U postgres -d app -1 -f postgres.sql
and this is the error:
psql:postgres.sql:1879: ERROR: current transaction is aborted, commands ignored
until end of transaction block
ROLLBACK
psql:postgres.sql:0: WARNING: there is no transaction in progress
I'm not really sure how to make a transaction in progress. This is the SQL file that I was trying to import into PostgreSQL: http://pastebin.com/2xMGhstd
|
psql error when restoring a pgsql backup on cmd
|
It is related to the ploop version that was used to create the ploop device and its TopSnapshot.
So, we need to update ploop to recreate its TopSnapshot.
For example:
online openvz6:
vzctl set $VEID --diskspace $SIZE --save
offline ploop:
ploop resize -s $SIZE DiskDescriptor.xml
|
When I try to make snapshot of my ploop container I have an error:
# vzctl snapshot $VEID --skip-suspend
Creating snapshot {6ea44de0-68ff-4044-9264-3dc7e818200d}
Storing /vz/private/ploop/$VEID/Snapshots.xml.tmp
Error in is_old_snapshot_format (snapshot.c:39): Snapshot is in old format
Failed to create snapshot: Error in is_old_snapshot_format (snapshot.c:39): Snapshot is in old format [38]
Failed to create snapshot
But there are no any snapshots:
# vzctl snapshot-list 2045
PARENT_UUID C UUID DATE NAME
I have found just one identical question about this problem, but it is not answered.
I think it is related to the updated vzctl and ploop, while the container was created earlier.
|
Ploop snapshot is in old format
|
You can move files directly to an archive using the rar m command:
rar m C:\xml\UPLOADED\in10xml_uploaded_%date%_%time%.rar C:\xml\UPLOADING\*.xml
After the above command completes, the files will no longer be in the UPLOADING directory.
cmd.exe doesn't have any facilities for formatting dates. You can get the date format you want by using substrings:
echo %date% (check the current format, e.g. dd/mm/yyyy)
echo %date:~6,4%%date:~3,2%%date:~0,2% (yyyymmdd)
But be careful: if you change your Region Settings in the Control Panel, you'll need to change this batch script to accommodate the new date format.
|
I need to back up some processed files. For this I need to move my files from C:\xml\UPLOADING to C:\xml\UPLOADED. Files that have been moved to C:\xml\UPLOADED have to be compressed (.rar or .zip) into an archive with the default name in10xml_uploaded_YYYYMMDD_HHMMSS. For this I used the following command:
cd "C:\program files\WinRar"
rar a C:\xml\UPLOADED\in10xml_uploaded_%date%_%time%.rar C:\xml\UPLOADING\*.xml
The command is not working the way I need: I need to move the files out of C:\xml\UPLOADING, but the command above only makes a copy of the files in that directory into C:\xml\UPLOADED, already compressed in the in10xml_uploaded_YYYYMMDD_HHMMSS format. The date and time are also not output in the format I want. How do I solve these problems?
|
Prompt Windows - Script Back up
|
Can I just back up my replica VMs and avoid unnecessary file transfer between servers?
You can back up replica VMs, but the result might not be what you expect, because:
"-Only crash-consistent backup of a Replica VM is guaranteed.
-A robust retry mechanism needs to be configured in the backup product to deal with failures. Or ensure that replication is paused when backup is scheduled."
This article may guide you in the right direction.
|
I back up my Hyper-V machines on my main server periodically. I have also turned on replication to a different machine where I have more storage space, so my backup VM images go to this secondary machine.
The question is: when I back up the replica, will I have problems restoring the VM if I would like to restore the replica VM on the main Hyper-V server?
Can I just back up my replica VMs and avoid unnecessary file transfer between servers?
|
Can I backup Hyper-V replica Virtual Machines instead of main VMs?
|
Problem solved:
Since OS X hides some special files like .htaccess, I had to make them visible so I could copy them.
Now everything is working as it should!
|
My RealURL path segments do not work anymore since a Backup.
I had TYPO3 7.6.10 on my Windows PC.
Then i installed TYPO3 7.6.11 on my new Mac.
I made a dump file of the database and copied all files of my TYPO3 Project.
After finishing, I could successfully log in to the backend.
The only problem I have is, that my RealURL does not rewrite my paths anymore.
Actually my first page is called localhost/project/home/ instead of localhost/project/index.php?id=2.
However, the first one always ends up in a 404 - Error.
I don't know why that happens, since I also copied the _.htaccess file into the project folder too. Or is that not the right way to back up?
Hope someone can help me.
EDIT
Problem solved: since OS X hides some special files like the .htaccess, I had to make them visible so I could copy them. Now everything is working as it should!
|
TYPO3 RealURL does not work after Backup
|
Start WinRAR and click in menu Help on Help topics. On tab Contents open list item Command line mode. Read first the help page Command line syntax.
Next open sublist item Switches and click on item Alphabetic switches list. While reading the list of available switches for GUI version of WinRAR build the command line.
For example:
"%ProgramFiles(x86)%\WinRAR\WinRAR.exe" a -ac -agYYYY-MM-DD -cfg- -ep1 -ibck -inul E:\Backup C:\NeedBackup\
Note 1: Switch -inul implicitly enables -y, which is not documented, but I know it from the author of WinRAR and from my own tests.
You might use also the switch -dh although I recommend not using it for this backup operation.
By using additionally switch -ao the created backup archive would contain only files with having currently archive attribute set. This means only files added/modified since last backup are added to new archive because of usage of switch -ac in previous backup operation, i.e. creating incremental backup archives instead of always complete backups.
Well, the switch -df could be also used instead of -ac and -ao to create incremental backups. WinRAR deletes only files which could be 100% successfully compressed into the archive.
For details on those switches read their help pages.
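To make the idea concrete, here is a sketch of a full backup followed by a later incremental run, using the switches described above (the Backup_full and Backup_incr archive names are placeholders):
rem Full backup; -ac clears the archive attribute of the files just backed up
"%ProgramFiles(x86)%\WinRAR\WinRAR.exe" a -ac -agYYYY-MM-DD -cfg- -ep1 -ibck -inul E:\Backup_full C:\NeedBackup\
rem Incremental backup; -ao adds only files whose archive attribute was set again since the last run
"%ProgramFiles(x86)%\WinRAR\WinRAR.exe" a -ac -ao -agYYYY-MM-DD -cfg- -ep1 -ibck -inul E:\Backup_incr C:\NeedBackup\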
Note 2: The command line creates a RAR archive file. For a ZIP file you would need additionally the switch -afzip.
Note 3: 7-Zip has also a help file explaining in detail also the usage from command line with all available commands and switches.
|
I have a batch file to copy data between 2 disks, shown below:
"C:\Program Files (x86)\WinRAR\WinRAR.exe" a -ag E:\Backup C:\NeedBackup -ms
Maybe use WinRAR or 7-Zip, but they cannot copy a folder that has a Deny permission for everyone. I want to skip that folder and continue copying the other files.
Can anyone help me?
|
How to skip "Access Denied" Folder when zip folder with command-line?
|
That's what you get from messing with system catalogs.
The simple and correct answer is “restore from a backup”, but something tells me that that's not the answer you were looking for.
You could drop the type that belongs to the table, all indexes on the table, all constraints, toast tables and so on, but you'd probably forget to drop something or drop something you shouldn't and end up with a bigger mess than before.
Moreover, the table file would be left behind and it would be hard to identify and delete it.
It would be appealing to try and recreate the pg_class row that you dropped, but you wouldn't be able to create it with the correct oid since you cannot directly insert a certain oid or update that column.
You could dump the whole database cluster with pg_dumpall, create a new cluster with initdb and restore the backup there, but this might fail because of the data inconsistencies.
Really, the best thing is to restore a backup.
|
I accidentally dropped a table from pg_class and I have the same table present in a different server inside a schema. How do I restore it?
I have tried this
psql -U {user-name} -d {destination_db} -f {dumpfilename.sql}
This is what I'm getting:
ERROR: type "food_ingredients" already exists
HINT: A relation has an associated type of the same name, so you must use a name that doesn't conflict with any existing type.
ERROR: relation "food_ingredients" does not exist
ERROR: syntax error at or near "19411"
LINE 1: 19411 10405 2074 45.3333333333 0.17550085492131515 NULL NULL...
ERROR: relation "food_ingredients" does not exist
food_ingredients is the table which I dropped from the pg_class.
|
If a table is dropped from pg_class accidentally then how to restore it from backup?
|
0
Try to copy only the files of user1.
find publicdir -user user1 -exec cp {} somedir \;
When you have used cp -p you can still remove the files using
find user1dir ! -user user1 -exec rm {} \;
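If the parent directory structure should be preserved as well (as discussed in the comments below), a possible sketch using GNU cp's --parents option; the directory names are placeholders:
# copy only user1's files while recreating their parent directories under /backup
find publicdir -type f -user user1 -exec cp -p --parents {} /backup \;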
find publicdir -user user1` -exec cp {} somedir \; only copies the files but i need the directorys intact to. like: parentdir ----------->shared document dir -------------------->somefiles --------------------->importantdoc dir ----------------------> some more files And if the user doesnt own any files in a directory that dir shouldnt be copied. I was thinking that i can copy the whole publicdir and then remove all files that doesnt belong to user1. And then check and remove empty dir. Is that possible or is it a more basic way to do it?
– DragonRapide
Sep 17, 2016 at 14:16
When you want to keep the dir-structure and not afraid for the overhead copying all files, you can do a cp -rp followed by removing files that ! -user user1. Cleaning up empty dirs can be done with unix.stackexchange.com/q/8430/57293.
– Walter A
Sep 17, 2016 at 17:45
so I ran sudo find /publicdir -user user1 -exec cp -rp {} /backup And it copys all the the files that user1 own and the directories he have created in the public dir. But it doesnt copy the parent directories for the files that is in other users directories, is this possible?
– DragonRapide
Sep 21, 2016 at 17:15
Why not try it?
– Walter A
Sep 21, 2016 at 18:43
|
|
Hi, I want help with an rm command that can remove all files and folders not containing any files created by a specific user.
So say I copy a "public" folder where lots of users store their files, and "user1" wants a copy of all his files and folders (not the empty folders).
|
Delete files and folders not containing file ownd by a user
|
0
As suggested by @Cyrus, running the script through shellcheck gives the following error:
To assign the output of a command, use var=$(cmd)
After correcting that and a few other errors, here is a working script:
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=$(find /shared -type d -user user)
tar -jcvf $DESDIR/$FILENAME $FILES
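Note that -type d matches directories; if the aim is to archive the user's files themselves (and to cope with spaces in file names), a possible untested variant pipes find straight into GNU tar:
FILENAME=user_archive.tar.bz2
DESDIR=/home/user
# NUL-delimited file list keeps odd filenames intact; GNU tar reads it with --null -T -
find /shared -type f -user user -print0 | tar -cjvf "$DESDIR/$FILENAME" --null -T -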
|
|
This question already has answers here:
How do I set a variable to the output of a command in Bash?
(16 answers)
Closed 7 years ago.
I am trying to learn scripting in Ubuntu.
I need to back up the files created by a specific user in a folder where other users store their files. They need to be compressed into a tar file with the file tree intact.
Edit: how do I find the files created by a user and then compress them into a tar file with all the directories and subdirectories?
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=find /shared -type d -user user * tar -rvf $DESDIR/$FILENAME
tar -jcvf $DESDIR/$FILENAME
|
Unix backup script [duplicate]
|
You must open Visual Studio as Administrator to do this.
Then go to the project properties and click View Windows Settings. You should see something like this:
<requestedExecutionLevel level="asInvoker" uiAccess="false" />
You should change this statement to:
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
Close everything and start the application as Administrator. The problem is that you can't write to the C:\ directory without Administrator privileges.
|
I have a question: why is it that I cannot back up my database to drive C using VB.NET?
This is the messagebox error:
My Stored Procedure that i will execute in vb.net
BACKUP DATABASE DatabaseNameTest TO DISK = '\\DSA02\Users\DSA_02\Source\Sample.BAK'
But if i try this to another drive like:
BACKUP DATABASE DatabaseNameTest TO DISK = '\\DSA02\DSA_02\Source\Sample.BAK'
It will not cause an error.
So why does my code return an error on drive C while on another drive it does not?
Can anyone solve or help me with this problem?
---------- Solution I've found ----------
Note: I'm not sure if this is the best answer/solution, but now I can save my backup file on drive C.
I configured the destination folder in the drive C user area, for example C:\User\Destination.
Steps:
Properties -> Security tab -> Edit button -> Click Add -> Enter an object name like "Guest" -> Click Check Names -> Then OK -> Then click the newly added CompName/Guest entry in "Group or user names" -> Then check all the permissions for Guest except Deny -> Then OK. This will now allow you to save files on drive C to the destination folder, with no need to run your program as administrator.
|
Backup database in drive C cause an error
|
You can read the current running schema and config through the Solr schema API and Solr config API.
Pay attention: the result of these APIs is not the original schema.xml or solrconfig.xml file, but from it you can rebuild the originals.
Also note that the Solr config API is only available in recent versions of Solr.
In older versions (I have tested version 4.8.1) there is no API for the Solr configuration, so there is no way to fully rebuild the solrconfig.xml file.
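As a sketch (the host, port and core name are assumptions), the two read-only endpoints can be dumped with curl and kept as a reference for rebuilding the files:
curl "http://localhost:8983/solr/collection1/schema?wt=json" > schema-live.json
curl "http://localhost:8983/solr/collection1/config?wt=json" > solrconfig-live.json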
|
I have a single-core Solr server. While Solr was running, the solrconfig.xml and schema.xml files of one collection were replaced by mistake.
The collection currently still works and responds to requests correctly, but the valid files in the conf folder have been replaced by the wrong ones. Surely, if I reload the collection, the bad files will be loaded and my collection will no longer work correctly.
Is there a way to get solrconfig.xml and schema.xml from the running collection, without relying on the solrconfig.xml and schema.xml files that exist in the conf folder?
|
how to backup solrconfig file from running solr
|
17
You can use WinSCP, it supports both scripting and TLS/SSL.
See automating file transfers to FTP server.
A simple batch file to download files over an explicit TLS/SSL (note the ftpes://) with WinSCP looks like:
winscp.com /log=c:\path\ftp.log /command ^
"open ftpes://user:[email protected]/" ^
"get /home/user/* c:\destination\" ^
"exit"
You can have the batch file generated by WinSCP GUI for you.
For scheduling, simply use the Windows Scheduler.
For details see scheduling file transfers to FTP server.
(I'm the author of WinSCP)
Similarly for an upload: Schedule an automatic FTP upload on Windows with WinSCP
Thanks for authoring WinSCP. It's a great, FREE tool!!
– Beachhouse
Jun 17, 2015 at 14:08
|
|
I need to connect to a host with username, password, implicit TLS encryption and port number to download files to a folder daily on windows server standard. Is there a third party command-line application that I could download, install and use for this (preferably free)? I'm not absolutely sure if this could be done with Windows ftp and if it can, could it be done in batch file?
I am trying NcFTP but I'm not sure if it supports encryption either.
I was given specific credentials, I have no control over the server. I have only instructions on how to access and download the files with FileZilla client over TLS. I need to schedule a routine that does this job for me since I don't want to manually do this every day. I can manage myself on this I only need a tool that could do this job over command-line.
|
Automatic download via ftp [duplicate]
|
Yes, it works without any issues on an Azure Windows VM, just the way it works on an on-premises VM. If you want to back up a specific folder only, use the article you mentioned.
|
I've set up a Windows Server 2012 R2 Azure virtual machine with SQL Server Web Edition.
I've set up a recovery services vault used to fully backup the Virtual Machine once a week, to be able to restore the installed software.
In SQL Server Management Studio, I've set up a Maintenance Plan that backs up the DB to a specific local folder in the virtual machine.
Now, I would like to back up this local folder to another location / storage in Azure, being able to restore this folder in case of need.
What's the best way to backup a single folder on a daily basis?
Should I follow this guide to "Back up a Windows Server or client to Azure using the Resource Manager deployment model"? Does the "Vault credential file and backup agent" works on a server in Azure too?
Are there any suggestions to perform the described plan?
Thank you very much!
Fabio
|
Backup single folder from Azure VM
|
To copy/sync files from S3 to a local folder (synced by Dropbox) use the AWS CLI.
AWS CLI: https://aws.amazon.com/cli/
AWS CLI S3 Copy Folder: http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
AWS CLI S3 Sync Folder: http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
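A minimal sketch of the idea, assuming the AWS CLI is already configured and Dropbox syncs a local folder (the bucket and folder names are placeholders):
# mirror a bucket into a locally synced Dropbox folder
aws s3 sync s3://your-bucket "$HOME/Dropbox/s3-backup"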
If you just need to backup S3 to another cloud provider, not necessarily to Dropbox, you should have a look at Google Cloud Storage and their sync service.
Google Cloud Storage: https://cloud.google.com/storage/
Google Cloud Storage Transfer Service: https://cloud.google.com/storage/transfer/
|
Several weeks ago, I had the answer to my question: Install Dropbox for Linux command line, use Andrea Fabrizi's great Dropbox-Uploader script, and finish up with mover.io (which I used to move the files from the app folder, which is not shareable, to a shareable folder).
Sadly, as of the middle of August, mover.io is no longer free. We would prefer not having to pay for their service (especially since we are only copying around 3 MB/day), but I have not yet found an approach that works.
My question: Can you point me to either:
a freeware, scriptable approach for copying files from AWS to a shared Dropbox folder?
a freeware alternative to mover.io that provides scheduled copying of files from a Dropbox app folder to a Dropbox shared folder?
Thank you.
|
Backing up AWS to Dropbox
|
0
Just in case someone else had the same problem:
Using the mysqldump command instead of the one I posted in my question worked fine:
$ mysqldump -uroot -ppass --all-databases > databases.sql
However, because of MySQL versions on both computers I also had to go through this other problem:
MySQL unknown column 'password_last_changed'
And finally now I have everything working again.
|
|
I'm trying to move a MySQL DB from version 14.14 Distrib 5.5.50 to another machine with 14.14 Distrib 5.7.13 (both machines are Ubuntu, 14.04 and 16.04 respectively)-
I've always managed to do it with these commands:
1) Backing up on origin-computer:
Users:
$ MYSQL_CONN="-uroot -ppassword"
$ mysql ${MYSQL_CONN} --skip-column-names -A -e"SELECT CONCAT('SHOW GRANTS FOR ''',user,'''@''',host,''';') FROM mysql.user WHERE user<>''" | mysql ${MYSQL_CONN} --skip-column-names -A | sed 's/$/;/g' > ${TEMP_DIR}"/"${MYSQL_DIR_NAME}"/users.sql"
Databases:
$ mysqldump -uroot -ppassword --all-databases > ${TEMP_DIR}"/"${MYSQL_DIR_NAME}"/databases_TMP.sql"
2) Restoring on the destination computer:
Users:
$ mysql -uroot -ppassword < users.sql
Databases:
$ mysql -uroot -ppassword < databases_TMP.sql
And it has always worked for me until now.
This time, no matter the order in which I take these steps or any combination/modification on the parameters, it is not working and I can't figure out why it's not working.
Every time I finish the process, when I launch MySQL Workbench and click any user, I immediately get this error message:
"Unhandled exception: object of type 'NoneType' has no len()"
I have no clue what can I do to solve it, so any idea will be really welcomed.
|
Error after moving MySQL DB to another computer (both Ubuntu, 14.04 and 16.04)
|
0
Here something quite simple for one sheet (you can adapt it for several sheets)
var source = SpreadsheetApp.getActiveSpreadsheet();
var data = source.getActiveSheet().getDataRange();
var cible = SpreadsheetApp.create(source.getName()+" backup");
cible.getActiveSheet().getRange(data.getA1Notation()).setValues(data.getValues());
Logger.log(cible.getId());
Logger.log(cible.getUrl());
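To adapt it for several sheets and values only, a rough sketch (the archive-folder ID is a placeholder you would fill in):
function backupValues() {
  // Copy the values (not the formulas) of every sheet into a freshly created spreadsheet
  var source = SpreadsheetApp.getActiveSpreadsheet();
  var backup = SpreadsheetApp.create(source.getName() + " backup " + new Date());
  source.getSheets().forEach(function (sheet) {
    var values = sheet.getDataRange().getValues();
    var target = backup.insertSheet(sheet.getName());
    if (values.length > 0) {
      target.getRange(1, 1, values.length, values[0].length).setValues(values);
    }
  });
  // Optionally file the copy in an archive folder:
  // DriveApp.getFolderById("ARCHIVE_FOLDER_ID").addFile(DriveApp.getFileById(backup.getId()));
}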
|
|
I am new to google app scripts and I have been looking for a way to back up a sheet. I am currently using.
DriveApp.getFileById("146qFnrQoNPBcDhV6QB0bscHFp8TquXJoAC1qg_esy4E").makeCopy("DailyArchive" + Date() + " backup");
The problem is its making a daily backup and those backups are updating just like the original and I just want to make a backup of the values so I have a archive. In my sheet I am importing data from a jail roster. http://www.kitsapgov.com/sheriff/incustody/jailwebname.xml
|
How do I script making a backup copy of a spreadsheet and its values not its formulas to an archive folder?
|
As commented lines begin with the "#" character in your hosts file, you should use Find /V "#" to display all lines NOT containing the string "#".
For more help, run Find /?
You can do something like this :
@echo off
Rem Batch script to copy uncommented entries of your hosts file
set "BackupHostsFile=%userprofile%\Desktop\BackupHostsFile.txt"
If Exist "%BackupHostsFile%" Del "%BackupHostsFile%"
set "hostspath=%windir%\System32\drivers\etc\hosts"
Rem Find /V "#" : To display all lines NOT containing the specified string "#"
for /f "delims=" %%a in ('Type "%hostspath%" ^| find /v "#"') Do (
If Not "%%a"=="" echo %%a>>"%BackupHostsFile%"
)
Start "" "%BackupHostsFile%"
|
I'm writing a batch script which does the backup.
It needs to make a copy of the "hosts" file with the following condition:
"if system "hosts" file contains any uncommented entries, then copy it".
Any ideas?
|
Batch script for backup hosts file
|
0
Are you getting this issue for all of your accounts?
I don't think it's due to the backup. There might be another cron job which is changing the account files' permissions from 777 to 755.
You can test this by disabling backup cron for one day and check account's file permission.
Yes it's for all of the accounts.
– Saleh Hashemi
Aug 15, 2016 at 17:03
What cronjob. I don't set any cronjob for that. Help me please
– Saleh Hashemi
Aug 15, 2016 at 17:04
|
|
I've configured auto legacy backup.
Every time it does a system backup, all the 777 permissions will change to something like 750 and it causes 500 server error.
Can anyone tell me how to stop this from happening. It resets the permissions every time I do a backup.
|
Cpanel file permissions change automatically after system backup
|
0
The only defining thing about an app in a device is the appId. So you can have two instances of the same app as long as they have different applicationId. You can simply change the applicationId property in build.gradle file to achieve this:
android {
...
defaultConfig {
applicationId "any.new.name"
}
...
}
Keep changing the appId for different versions
This is not relevant to my question. I want to make clone of the app installed on my real device with "all it's data". As I've mentioned, app's data will be the user's experience which may contain a week or month of data. Changing app id will install a fresh copy which I'm not interested in.
– mahyar android
Aug 9, 2016 at 20:00
@mahyarandroid OK..Well your question did say "for better debugging" and "make changes to my app and then update the app in my phone respectively"...Let me think over it...
– Shaishav
Aug 10, 2016 at 2:44
|
|
I've created an Android app which tracks user experiences (for example mood) over days (so it uses databases) and I've installed it on my (real) phone. To truly assess my application I work with it on my real phone for some days to see how it works.
The problem is about upgrades: when I make changes to my app and then update the app on my phone, there is a great chance everything will be ruined. For example, suppose I mistakenly add a line that deletes the database! So I want to clone (back up) the app on my real device before updates. Is it possible?
More clarification:
I want to make clone of the app installed on my real device with "all it's data". As I've mentioned, app's data will be the user's experience which may contain a week or month of data. Changing app id will install a fresh copy which I'm not interested in.
|
Clone(backup) an Android App with all its data on a real device for better debugging
|
0
Unable to log in because:
1. The users don't have permission to access your database.
2. Check whether the SQL Server service is running in your system's Services.
3. If it is on a server or another PC, check whether you can ping the server IP (network connectivity).
Hope it helps.
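Another frequent cause after restoring a production backup onto a different server is orphaned database users (the database user's SID no longer matches any server login). A hedged T-SQL sketch, with dev_user as a placeholder name:
-- List database users whose SIDs do not match a server login
EXEC sp_change_users_login 'Report';
-- Re-map an orphaned database user to the existing login of the same name
ALTER USER dev_user WITH LOGIN = dev_user;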
|
|
I have refreshed the development database manually from a production database backup. After refreshing the database, the dev team is unable to log in and access that database. Please let me know your thoughts on how to troubleshoot this systematically and resolve the issue.
Thanks and regards
|
Users are unable to login to SQL Server dev database
|
For anyone who might bump into this: Typical n00b error on my part.
I had forgotten to update the rootfs path in my config file for the container.
As I was doing a restore test of an existing container, I had untar'ed my backup to another directory in /var/lib/lxc - e.g. /var/lib/lxc/restored - but hadn't updated the config in /var/lib/lxc/restored/config to point to the correct path.
This resulted in the container using the same rootfs as my original - still running - container. Thus the problems with MySQL.
It is interesting to note that you can spin up two containers sharing the same rootfs. Maybe there are some applications for this "feature" somewhere.
LXC is awesome.
|
I've followed these simple instructions in order to backup and restore an LXC container:
https://stackoverflow.com/a/34194341
The backup and restore procedure go well. I've made triple sure I use the --numeric-owner flag when tar and untar'ing, and the container starts up fine. However, MySQL in the container barfs all over the place with the following errors, when doing service mysql restart (output from journalctl -xe):
[ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
and
[ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
I can get it to start up if I delete the following files, so that mysql recreates them:
/var/lib/mysql/ibdata1
/var/lib/mysql/ib_logfile*
/var/lib/mysql/aria_log_control
"Solution" gleaned from https://bbs.archlinux.org/viewtopic.php?id=160277
But this royally messes up my site database.
What is going on here?
It seems to me that file permissions, or something along those lines have gone awry - but when I compare ownership and rights between the original, working container and my restored copy, it all looks identical.
|
Backup and restore LXC container with LAMP stack - MySQL cannot start in container
|
0
Executing MongoDB commands from Bash didn't really work, because you have to keep the connection open if you want to unlock the database again.
But when executing commands one at a time from Bash, each mongo --eval call connects to the database, executes the command and disconnects.
I ended up writing a JavaScript script and executing it from the Mongo shell.
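For reference, one possible way to hold a single connection open from Bash instead (an untested sketch, not the script I used) is to feed the lock command, the dump and the unlock command through one mongo shell session:
(
  echo "db.fsyncLock();"
  # the dump runs while the same shell session (and its lock) stays open
  mongodump --out "/backup/$(date +%F)" >&2
  echo "db.fsyncUnlock();"
) | mongo admin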
|
|
I have a shell script that backs up MongoDB database.
I have to lock the database before backing it up.
mongo --eval "db.fsyncLock();" works fine, but when I run mongo --eval "db.fsyncUnlock();" it just waits and does nothing.
How can I make unlocking work?
edit: I know I have to keep the connection open, but how?
|
MongoDB - making db.fsyncUnlock(); work
|
0
Have to leave for a meeting. I'll leave you the script I've been working on to help you:
#!/bin/sh
# in your case filename would be the variable used in the for loop
filename=$(find DATA-TT*)
# Part gets the date from the filename
part=$(echo $filename | grep -Eo '[[:digit:]]{8}')
echo "$filename -> $part"
# limit_date is the current date minus 30 days
limit_date=$(date +'%Y%m%d' -d "30 days ago")
echo $limit_date
# this part compares and echoes; you can try it and replace the echoes with zipping, moving, etc.
if [ $part -ge $limit_date ]
then
echo "part has not yet reached 1 month of age"
else
echo "part is older than 1 month"
fi
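And a rough extension of the same idea over the directory layout from the question (untested; assumes GNU date and gzip, and that each MyDirectory*/Backup folder already exists):
#!/bin/sh
limit_date=$(date +'%Y%m%d' -d "30 days ago")
for f in MyDirectory*/data/DATA-TT_*.lst MyDirectory*/output/DATA-TT_*.pdf; do
  [ -e "$f" ] || continue
  part=$(echo "$f" | grep -Eo '[[:digit:]]{8}' | head -n 1)
  if [ "$part" -lt "$limit_date" ]; then
    backup_dir=$(dirname "$(dirname "$f")")/Backup
    # compress the file in place, then move the .gz into the Backup directory
    gzip "$f" && mv "$f.gz" "$backup_dir/"
  fi
done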
|
|
Directory structure
MyDirectory
-data/
-DATA-TT_20160714_soe_test_testbill_52940_1.lst
-output/
-DATA-TT_20160714_soe_test_testbill_52940_1.pdf
-Backup/
#!/bin/bash
for i in $( ls ); do
echo $i
#cd $i
#cd $i/data/
echo $i
cd $i/data/
echo $i/data/
$(ls *1.lst)
cd ../output
$(ls *1.pdf)
done
I need to navigate the directories and subdirectories where the input and output files are kept. These files have a date in YYYYMMDD format which I need to compare with the current date. If the difference is greater than 1 month, I need to zip those files and move them to the Backup directory. The "DATA-TT" part is constant.
Can anyone help me with this? There may be many directories with the same subdirectory structure, for example MyDirectory1, MyDirectory2, MyDirectory3.
|
navigating directories and sub-directories and moving the files using shell
|
0
You can use rsync for this case. Specify a --compare-dest directory so that only files which differ get copied.
First try a dry run to check your config:
rsync -aHxv --progress --dry-run --compare-dest=backup-change-files/ folder1/ folder2/
If all is good, run it with:
rsync -aHxv --progress --compare-dest=backup-change-files/ folder1/ folder2/
Please try it with some copied data for testing first.
|
|
While replacing a folder, any file which has been changed should first get backed up and then replaced. Is there any script or command for this in Linux?
|
How to get backup of the file replacing that file using command
|
0
As its name suggests, the rsync command syncs files between a remote and a local machine. From what you are describing, you want to back up files locally, so I think a crontab job with a shell script will satisfy your demands. A tar command may take some time, but you can split your /var/www files into smaller sets and use tar -g to back up your files incrementally.
As for the inconsistency problem, a backup is a snapshot of the files at exactly one point in time, so at that time the backup reflects the current status. Changes made to some files after that will be backed up at a later time.
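A minimal sketch of such a cron-driven incremental backup using GNU tar's snapshot file (the paths are placeholders):
# level-0 and later incrementals are all driven by the same snapshot file www.snar
tar -czg /backup/www.snar -f "/backup/www-$(date +%F).tar.gz" /var/www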
|
|
Say I am running an HTTP server with data at /var/www. I want to backup /var/www to /root/backup/.tmp/var/www daily automatically.
Mostly the backup is using rsync technique. The problem is that since the HTTP server is running, there could be file modification during an rsync backup process.
For an HTTP server a certain "transaction" could involve multiple files, e.g. modifying file A and B at once, and therefore such scenario is possible: rsync backups file A => a transaction occurs and file A and B are modified => rsync backups file B. This causes the backup-ed files to be inconsistent (A is before transaction while B is after transaction).
For an HTTP server shutting down for backup is not viable. Is there a way to avoid such inconsistent file backup?
|
Real time backup for a modifying directory (e.g. HTTP server)
|
0
Cassandra table-schema details will be stored as meta data in system keyspaces.
Cassandra snapshot just creates a copy of sstables for the requested keyspace/column_familes. So to restore the snapshot, you need to explicitly create the schema in destination cluster.
|
|
I have snapshot backup from cassandra cluster 1, and need to restore the same on cassandra cluster 2. Is it possible to do so, without having schema?
|
cassandra snapshot restore on different cluster on missing schema
|
Do "adb pull /data" hoepfully that should work, if not, report it here :).
|
I have a Samsung Galaxy note (1) which is stuck in boot logo when I turn it on. I can access Android recovery mode, and have tried wiping cache data and do a normal boot, but it didn't work. I'm trying to avoid factory setting reset before getting hands on data inside and saving them. I've tried to backup through adb backup command but it didn't work. adb does not give me permission to access data on data folder. I've also tried to update firmware in download mode using kies but it does not have it for GT-N7000.
P.S.: I've got a new battery but still the same.
|
How to extract files from an unrooted android (GT-N7000) which is stuck in boot logo?
|
0
Unless you are talking about NodeJS, I do not think that this is possible. Javascript is a client-side language, not a server-side language.
Take a look at NodeJS and this library: https://github.com/nonolith/node-usb. That should put you on the right track.
Thanks for the link. I'm sure that it works though, I've actually found a malicious JS worm (Proslikefan) that does so. Here's a link - johannesbader.ch/2016/06/proslikefan
– user5946522
Jul 26, 2016 at 18:39
It looks like ActiveX is the key to making this work, at least in Javascript. ActiveX is how you access the local system. In this example, an Excel spreadsheet is modified and saved using Javascript's ActiveX object: msdn.microsoft.com/en-us/library/7sw4ddf8(v=vs.94).aspx
– camrymps
Jul 26, 2016 at 19:14
This link may also be of use: stackoverflow.com/questions/6742848/…
– camrymps
Jul 26, 2016 at 19:16
I'm looking for a JS translation from the VBS code for each drive in fso.drives, do you know what I mean?
– user5946522
Jul 26, 2016 at 19:57
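A hedged sketch of what that translation could look like in Windows Script Host JScript (run with cscript.exe, not browser JavaScript):
var fso = new ActiveXObject("Scripting.FileSystemObject");
var drives = new Enumerator(fso.Drives);
for (; !drives.atEnd(); drives.moveNext()) {
    var drive = drives.item();
    // DriveType 1 = removable drive; IsReady filters out empty card readers
    if (drive.DriveType === 1 && drive.IsReady) {
        WScript.Echo("Removable drive: " + drive.DriveLetter + ":");
    }
}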
|
|
I'm currently creating a small backup utility in JavaScript (running locally from a .js file, not in a browser) than continuously scans for removable devices and makes backups of them. It's basically finished, but I'm unsure how to scan for removable devices.
How do I query for removable devices in a for-loop?
|
Querying for removable devices in JavaScript
|
0
Assuming the database on the downed server shut down cleanly, and you have access to the disk containing all the datafiles, yes, you can do this in the same way that we used to clone databases: copy the datafiles and re-create the control file.
Have you got a backup of the controlfile to trace, e.g.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Then locate the trace file, and amend it (new database SID).
If you don't have a controlfile backup, you might need to build it manually, the key thing being to ensure all the datafiles are included.
This method also, of course, relies on the source and target database versions being the same, and the OS having the same endianness.
hi sir how can i be in contact with you ? if you give me your email i'll be so thankful...
– Armin
Jul 27, 2016 at 22:09
|
|
I am researching how to back up an Oracle database when my operating system is down.
If you can give me some useful comments or point me to some useful sources, books or maybe video tutorials,
I'll be very thankful.
|
how to backup oracle databasae when operating system is down
|
0
Use the command
mysqldump -u user -p --single-transaction --databases database_name | gzip > database_name.sql.gz
|
|
I need to backup my phpBB forum's mySQL database. Should the forum first be disabled so that no new entries are made in the database? Or can I leave the forum live? And in that case, is the worst that can happen that the database would miss some of the newer entries (no big deal)? Or could I end up with a corrupt database backup file (big deal)?
|
What's the danger of backing up a database while being used?
|
0
There is a Microsoft support page for this Error Message 3013.
It is apparently caused when a filemark in your backup device could not be read. Resolution steps below:
To allow SQL Server to perform new backups to the backup device, you must manually delete or erase the device by using the following command:
BACKUP DATABASE mydatabase TO DISK='C:\MyDatabase.bak' with FORMAT
I have tried that, still get the same error. If you read my statement, I did put WITH FORMAT in my statement.
– Drewber81
Jul 12, 2016 at 12:53
I also tried saving to an external drive on my personal laptop from the server via a unc path, and that didn't work. So, my guess is there is something wrong with the database itself.
– Drewber81
Jul 12, 2016 at 13:06
|
|
I have tried to backup in Microsoft SSMS with the GUI backup task, and it fails after a few seconds, so then I tried running this command:
BACKUP DATABASE databasename TO DISK = 'd:\databasename_full.Bak' WITH FORMAT, MEDIANAME = 'd_datbasenamebackup', NAME = 'Full Backup of databasename';
And get a very generic error of the following
Msg 3013, Level 16, State 1, Line 1 BACKUP DATABASE is terminating abnormally.
I am wondering if anyone has come across this error before. Everything I have read is saying there is a media fault, which I know isn't the case.
|
I am trying to backup a database and am getting MSG 3013
|
0
You can open that large file using Large Text File Reader , split the file then manually adjust the last part of first file and first part of the 2nd file.
|
|
Currently I have a backup up SQL file of a MySQL database where the database is already dead. I want to rebuild the MySQL database again but when I import the SQL file, it says Got a packet bigger than 'max_allowed_packet' bytes, which I found the error is caused by the fact that the insert statement is too long.
I don't have the permission to increase the max_allowed_packet of the database. The whole file is around 5 GB and it is too painful to split the insert statements by hand. Is there any tool I can automatically split long statement into 2?
|
Is there a way to split one long insert statement into 2 in SQL backup file?
|
0
It seems like there is a permission issue with the files and folders which you are trying to back up to Azure. Please check whether the folders or the drive you are backing up are formatted with NTFS.
Thanks.
Hope this helps.
|
|
We are using MS Azure Backup to backup our files from a specific folder on a local disk to an Azure backup service however it is not updating the cloud version of some files when they have been updated locally.
The errlog has recorded a number of the following errors
Failed: Hr: = [0x80070005] : CreateFile failed \?\Volume{...}\ with error :5
More worryingly the jobs in question on the jobs list show successful with no indication of any issues.
I only discovered this because 1 job from 3 days ago was tagged as having warning which appears to be a connectivity issue somewhere and came across these entries in the log.
Would someone be able to
Indicate how we can get these changed files to be backed up?
Answer why the MS Azure Backup jobs are listed as successful when these warnings have been recorded?
Thanks
Gavin
|
MS Azure backup failing to backup new versions of files
|
You can't restore over an existing database using nbackup. You either need to
delete the old database first and then restore,
or restore under a different name, delete the old database, and rename the new database to its final name.
See also the nbackup documentation, chapter Making and restoring backups:
If the specified database file already exists, the restore fails and you get an error message.
As far as I know it was a design decision to not allow overwriting an existing database. Gbak indeed has that option, but only for historic reasons; if it were built today, it would likely not have that option.
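A possible sketch of that second approach, reusing the fbsvcmgr call from the question, restoring to a temporary name and then swapping the files; the _NEW name is a placeholder, and the database must have no active connections when the old file is deleted:
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB_NEW.FDB -nbk_file d:\reserve\lab_FULL.fbk
rem run the delete/rename on the machine that hosts d:\reservedb
del d:\reservedb\LABORATORY_DB.FDB
ren d:\reservedb\LABORATORY_DB_NEW.FDB LABORATORY_DB.FDB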
|
I'm configuring live backup and restore scripts to have "replicated" firebird dbs on main and reserve servers.
Backup doing fine:
"C:\Program Files\Firebird\Firebird_2_5\bin\nbackup" -B 0 "D:\testdb\LABORATORY_DB.FDB" D:\testdb\lab_FULL.fbk -user SYSDBA -pass masterkey -D OFF
Copying file to the remote server as well:
net use R: \\fbserv2\reserve
xcopy /Y D:\testdb\lab_FULL.fbk R:\
But restoring on remote side
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB.FDB -nbk_file d:\reserve\lab_FULL.fbk
caused an error:
Error (80) creating database file: d:\reservedb\LABORATORY_DB.FDB via copying from: d:\reserve\lab_FULL.fbk
The only way to restore the database is to manually delete the old d:\reservedb\LABORATORY_DB.FDB before restoring. GBAK has an option to overwrite the database file being restored, while fbsvcmgr seems not to. Is there any other option? Did I miss something?
|
Restoring Firebird 2.5 with fbsvcmgr
|
I tend to see a performance boost when using SqlCommands to backup databases.
Sub Backup()
Dim con As New SqlClient.SqlConnection("data source=DATASOURCE;initial catalog=NAME OF DATABASE;Integrated Security=True")
Dim cmd As New SqlCommand()
Try
con.Open()
cmd.CommandType = CommandType.Text
cmd.CommandText = "Backup database BQDB To Disk='C:\Users\Zulfikar\BQBackup.BAK'"
cmd.Connection = con
cmd.ExecuteNonQuery()
Catch ex As Exception
MessageBox.Show(ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
End Try
End Sub
|
Currently I back up my files in such a way that, when the user clicks Backup, the program asks:
To Backup you must close your current session. This application will be closed now. Do you want to continue?
The application is then closed and a new application is launched, in which, if you click Backup, it copies the .mdf file and the .ldf file.
|
But I have read in many places that copying the .mdf file and the .ldf file is the unsafest way, so is there any other way to do a backup other than using SSMS? I want the user to be able to back up from within the application.
|
Current code:
Sub Backup()
Dim con As New SqlClient.SqlConnection("data source=.\SQLEXPRESS;initial catalog=BQDB;Integrated Security=True")
Dim cmd As New SqlCommand()
Try
con.Open()
cmd.CommandType = CommandType.Text
cmd.CommandText = "Backup database BQDB To Disk='C:\Users\Zulfikar\BQBackup.BAK'"
cmd.Connection = con
cmd.ExecuteNonQuery()
Catch ex As Exception
MessageBox.Show(ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
End Try
End Sub
|
Error Message Using Justin's Code
|
How to Backup Database with Vb.Net without SSMS?
|
0
Take a look at this link.
The maximum backup size is 10gb.
You can scale up to premium but the max backup size will stay the same. Half way through the article they describe how to exclude files/folders from the backup (if that helps).
|
|
I have a webapp scaled at S3 Standard with 50GB storage. Trying to setup a backup and ran a manual backup but failed saying "The website + database size exceeds the 10 GB limit for backups. Your content size is 15.9 GB." Any idea?
|
Azure webapp backup fails
|
0
Have you setup the Azure backup vault?
I found this guide pretty useful. It details each step to get it all set up.
https://azure.microsoft.com/en-us/documentation/articles/backup-azure-microsoft-azure-backup/
Yes, I've set-up the azure backup vault...no issues at all. I've now installed MABS on a local VM, I need to set-up a local storage group where the backups are staged before they get sent up to Azure.
– KrisM
Jun 23, 2016 at 13:28
|
|
I'm going to trial MABS. I've set-up azure Resource Group and this is registered in MABS.
However, when trying to set-up a local storage group, I have no available disks to add? I'm obviously missing something really simple...but what?
|
Microsoft Azure Backup Server
|
This requires a well-designed 3-tier architecture for your app.
I will give you short info on how you can achieve it,
but it requires R&D and effort if you are working alone.
Step 1. Create a column in your local storage (SQLite or Core Data) which will represent a timestamp.
Step 2. Create a helper class to do the fetching and updating
(using delegates or callbacks),
something like
-(void)sendAllImagesToServer:(NSString*)aBaseURL imageData:(NSData*)aData completionBlock:(void(^)(BOOL isSuccess))aSuccessCallback {
}
Step 3. If this is two-way communication, i.e. if the server updates and you need to get the updated image, then write a method to do that too.
That should give you an understanding of the 3-tier architecture, uploading multiple images and something about the sync functionality.
|
I need some help to get all photos from local storage and send them to a server for backup purposes. I am able to get them using the AssetsLibrary framework, but the app crashes due to RAM memory usage. Is there any way to upload all my images to the server and later, based on time (a daily backup), send only those which were not uploaded earlier?
|
How to get all photos(>200) from device and send to server for Backup
|
Let's be clear about one thing first: SQL Server Express, Standard and Enterprise are the editions.
SQL Server 2005 , 2008, 2008 R2 and 2012 are SQL Server versions.
Now coming to your question whether you can restore a database from 2008 Express to 2012 Enterprise?
The Simple Answer would be YES you can.
A couple of things to keep in mind.
SQL Server editions have no such limitation: you can back up and restore databases from one edition to another without any restrictions (except SQL Server Express, which can only accommodate databases up to 10 GB). Other than this size limitation in SQL Server Express, all is good.
SQL Server versions, however, have a very strict limitation: you can go up three versions but cannot go down at all.
For example, if you had taken a backup on SQL Server 2008 (regardless of edition), you can restore this backup onto SQL Server 2008, 2008 R2, 2012 and I think also 2014. But you cannot restore it onto SQL Server 2005.
I hope the explanation clears up some of the confusion.
|
Can I use a SQL Server Express database backup file to restore that database on a full fledged version of SQL Server. I am particularly looking at SQL Server 2008 Express to SQL Server 2012 Enterprise. And if so how?
|
SQL Server express backup
|
0
Search for Oracle DBA concepts on the internet to find many helpful documents.
For example, here is a link to a .pdf that does a good job of explaing Oracle concepts:
Oracle Database Concepts
NOTE: links don't always stay valid, of course, so go ahead and download the .pdf file and see if that helps.
|
|
I was searching over the internet to find good explanations of flashback, backup and checkpoint, but I find it hard to understand the difference.
Both flashback and backup can revert database to the previous state. Flashback can fix logical failures, but not physical failures.
Redo logs - store all changes made to the database, used to apply changes since latest backup
Checkpoint - when we update database, physical files aren't updated right away, but all changes are saved in redo logs to improve performance. Checkpoints are points when those changes are flushed to the database.
Sorry for my bad English. Could somebody explain those terms in more details ?
|
Flashback, backup, checkpoint, redo logs
|
0
Try to exclude all VHD files and adding cbengine.exe to the Trusted applications list, see this article:
http://www.cantrell.co/blog/2016/3/21/microsoft-azure-backup-feat-kaspersky
Done that but it still failed. N.B. I excluded the Folder VHD. Should I have saved each individual file within the folder?
– user2204315
May 30, 2016 at 17:06
Later: found how to include all files within the folder ( put * at the end of the folder name). Backup still fails.
– user2204315
May 30, 2016 at 17:17
|
|
Since installing Kaspersky Total Security two days ago, my Azure backups keep failing. This is the process: Taking snapshot of volumes; Preparing storage; Estimating size of backup items; job failed. The error message for each volume is 'unable to find changes in a file. This could be due to various reasons (0x07EF8).
Data in my files has definitely been changing. I have tried two things: 1. Disabled Kaspersky. 2. Completely deleted the backup and rebuilt from scratch. This made no difference at all.
|
Azure Backup fails after installing Kaspersky Total Security
|
0
The author gave an answer here:
https://github.com/ncw/rclone/issues/498
Seems to be related to the server software.
|
|
I'm trying to use the "--files-from" option to limit disk scanning.
I provide a list of 10 files, but with the verbose option I see that rclone is scanning thousands of files. Is this the normal behavior?
Thanks in advance
greg
rclone v1.25 - debian 8.2 kernel 2.6.32-39 i686
target is hubic
|
rclone "--files-from" scans a lot of other files?
|
0
Try this:
mongorestore --db DBNAME --collection categories --host host.mlab.com --port 1111 --username username --password password categories.bson
It will restore specific .bson to that collection
|
|
I have a remote MongoDB and I want to restore only some collections to it. Any suggestions on how to do that?
mongorestore -d DBNAME -c categories DBNAMENEW/heroku_mb4p0d3s/categories.bson
The above command works because it is local, but the same command doesn't work for the remote server:
'mongorestore -d DBNAME -c categories -o --host host.mlab.com --port 1111 --username username --password password -d databasename/categories.bson'
Any idea where I am going wrong?
|
Mongodb restore some collections
|
0
A daily snapshot (using mysqldump) on a remote server, plus one or many slaves in different geographical locations, is enough:
If you lose the master: the slaves have up-to-date data.
If you lose some data: the master and the slaves store queries in the replication binary log.
If you lose everything: you still have the daily backup.
But if you're storing real money in a MySQL table, please consider using cryptographic proofs of the state of the database, for example by storing a checksum (like a SHA-256) of each transaction salted with the previous one, so nobody can update a single transaction without updating the checksums of every transaction after the modified one: sha256(current_transaction, previous_checksum).
You may also read up on "write only databases", which is what signing rows forces you to do: deleting a row breaks the checksum chain, marking the table as obviously tampered with.
I can't write enough in a single response about each and every security measure you can take while storing money, but obviously backups alone are not enough (they can be modified too; the sha256 "proofs" in each line of each backup can be altered as well).
But you may document yourself in the direction of certifications like the "Payment Card Industry Data Security Standard" (PCI DSS), which probably contains a lot of information about what you're trying to achieve.
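For illustration only, a tiny Python sketch of such a checksum chain (the transaction strings are placeholders):
import hashlib

def chained_checksum(previous_checksum, transaction):
    # each checksum is salted with the previous one, so editing one row breaks all later checksums
    return hashlib.sha256((previous_checksum + transaction).encode()).hexdigest()

prev = "0" * 64  # genesis value
for tx in ["+100 user42", "-30 user42", "+5 user7"]:
    prev = chained_checksum(prev, tx)
    print(tx, prev)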
|
|
I'm doing a project where users upload money into the system; afterwards, database queries will increase and decrease this initial amount.
I have to do a backup that allows me, in case the server breaks, to reconstruct the identical situation before the break without losing EVEN ONE TRANSACTION.
Considering that the database only weighs a few MB, and that only 3-4 queries per hour will be made, I am thinking about doing an incremental backup for every transaction performed on the database.
Is there a program that takes a backup whenever a query that modifies data (INSERT, UPDATE) is run against the database?
|
MYSQL - Database backup every transaction
|
0
I found this when I was trying to help somebody else find a file they recently accessed
In Windows 8.1 there is something called "Recent Places" under Favorites in File Explorer. This was in the same favorite list where I had kept Recent Items and still did not notice it because of getting panicky. This showed me the folders I had accessed something I really wanted a week back. It would have saved me so much tension and my precious time.
Now am planning to update to Windows 10 and google searched if I will still have access to this data and found this
http://answers.microsoft.com/en-us/windows/forum/windows_10-files/restore-recent-places-to-windows-10/037af727-9b06-485e-bb45-4a6c60a3f222?auth=1
Hope this is useful to someone
|
|
I know this question has been apparently asked here and here
But mine is different.
Do file histories include only extensions such as pdf, jpg, mp3, doc
etc
File history for moved files is available not just deleted ones?
At preset I am accessing C:\Users\Myname\AppData\Roaming\Microsoft\Windows\Recent folder
But here I am not able to see recently modified files which come under a directory in Users\Myname folder. Why do all recent files not get mentioned here?
Is there place where these settings can be changed?
Is it possible to look up recently accessed/modified folders?
I have a developing background in assembly and C but restarting after more than 6 years. I saw other threads where they were doing things programmatically but did not understand much and looked their requirements were different from mine. I am willing to try out programmatic solutions if an online source is pointed to.
I take a regular back up of my files, but yesterday happened to give my PC into someone's hand when learning something and the person was an impulsive shift deleter not even bothering with the messages on the PC and was not very aware of what was being done or happening.
Question 2 is because I have earlier accidentally moved folders into another folder in a previous PC
|
File History in Windows 8.1
|
0
What is the issue you are running in to?
The documentation for specific CmdLet is here.
You will need the Azure Powershell Cmdlets.
Guidance on how to install the Azure Cmdlets is here.
You will also need to login to Azure and make sure you are pointed at the right tenancy. Some code to assist with this is below. Give more details on the error you are receiving and that will help people to give a more specific answer.
# To login to Azure Resource Manager
Login-AzureRmAccount
# You can also use a specific Tenant if you would like a faster login experience
# Login-AzureRmAccount -TenantId xxxx
# To view all subscriptions for your account
Get-AzureRmSubscription
# To select a default subscription for your current session
Get-AzureRmSubscription –SubscriptionName “Free Trial” | Select-AzureRmSubscription
# View your current Azure PowerShell session context
# This session state is only applicable to the current session and will not affect other sessions
Get-AzureRmContext
|
|
I have been using the code from https://github.com/Azure-Samples/service-fabric-dotnet-web-reference-app to create backups for our current project, but I am unable to invoke the data loss method using power shell script to trigger the restore.
Does anyone have experience with this or have another method for creating backups and restoring them?
|
Restore Azure Service Fabric backup
|
0
To make a backup, the easy answer is to just use xcopy. To delete files/directories older than a given number of days, use the command below (note the minus sign before the day count, which selects files last modified before that many days ago):
forfiles -p "C:\what\ever" -s -m *.* -d -<number of days> -c "cmd /c del @path"
Days are no good, this is to run during a shift to keep backups of shared excel sheets that regularly break.
– J.Burton
May 9, 2016 at 17:29
How often do you need it to backup?
– TechnicalTophat
May 9, 2016 at 17:53
I say in the post. It's a choice in seconds. SET Schedule=60 It could be 60, or 1000
– J.Burton
May 9, 2016 at 18:06
@J.Burton Maybe do something like this in C#, doing this in cmd is going to absolutely hammer your processing power. Not only are you copying a large excel sheet over the network, you are also deleting copies of the previous excel sheet no matter if it's changed or not, all the while running your tasks Synchronously. If you want I'll give you a hand but this as a batch script isn't going to perform as well as an ASync C# application
– TechnicalTophat
May 9, 2016 at 18:25
|
|
Premise:
Script to run every n seconds which will create a backup of a defined file to a defined location.
After n backups have been created, clean(delete) out dated ones.
Problem:
I have managed to get a version of this working to back up a folder and delete older versions, but when I attempt this with a specific file, "no files are found".
I've scratched my head about this for several hours now; I'm probably missing something small.
I do not simply want to delete all .xlsx files, for example, as there is the possibility that there will be multiple different .xlsx files in the Target folder.
I only want to delete old versions of "File1" if there are 3 newer versions available.
Is there some way I can do this? I've tried with a wildcard, as you can see below, but no luck...
Help please :(
Vars:
- Schedule=60 (Time in seconds between each backup)
- NumFiles=3 (How many backup versions to keep)
- File1Path=D:\Source (Location to copy files from)
- File1=backmeup.xlsx (Filename & extension)
- BackupPath=D:\Target (Location to copy files to)
Code snippet:
@echo off
Color 02
mode con: cols=150 lines=25
:Start
SET Schedule=60
SET NumFiles=3
SET File1Path=D:\Source
SET File1=backmeup.xlsx
SET BackupPath=D:\Target
echo Press any key to begin.
pause >nul
echo.
:Single1
FOR /F %%a IN ('WMIC OS GET LocalDateTime ^| FIND "."') DO SET DTS=%%a
SET DateTime=%DTS:~0,4%-%DTS:~4,2%-%DTS:~6,2%@%DTS:~8,2%%DTS:~10,2%
IF DEFINED File1Path (
xcopy "%File1Path%\%File1%" "%BackupPath%\_BACKUP_%DateTime%_%File1%*" /h /d /y /c /i /q /r /k >nul
( for /f "skip=%NumFiles% delims=" %%F in ('dir "%BackupPath%" /b /a:-dh /o:-d "%File1%"') do echo del /q "%BackupPath%\%%F"
( goto SingleDone ))) ELSE ( goto End )
:SingleDone
echo Backups created - %DateTime%
TIMEOUT /T %Schedule% /NOBREAK
GOTO Single1
:End
echo Something is not defined or incomplete.
echo Press any key to exit.
pause >nul
exit
|
BATCH - Create a backup, then after n created, delete old backups
|
Ultimately I wanted to be able to perform efficient cron-based nightly backups while minimizing the chance of data corruption, and to be able to move the backup off-site with encryption. I also needed a time-machine capability in case something did get corrupted for some reason (even though going back to a previous commit is an option as well). It turns out duplicity has all of that built in and is a perfect fit:
https://help.ubuntu.com/community/DuplicityBackupHowto
So that's what I'm rolling with. I'm also planning to switch off the health checks and perform them outside of gogs in a separate cron job, but I'm still researching the recipe for that. If anyone has tips, please comment.
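For reference, a minimal sketch of the kind of duplicity invocation I'm scheduling; the paths, GPG key ID and target URL are placeholders, so adapt them before use:
# encrypted (and implicitly incremental) backup of the gogs data directory to a remote host
duplicity --encrypt-key ABCD1234 /home/git/gogs sftp://backup@backuphost//srv/backups/gogs
# restore the most recent backup somewhere else
duplicity restore sftp://backup@backuphost//srv/backups/gogs /tmp/gogs-restore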
|
This question is related to this question which targets only the gogs-repositories component of gogs / git:
Hotback of Git Server Using RSync?
Gogs also performs 'health checks' on the git repositories. What do these health checks do? Could they mutate the state of the repositories? If so, could that cause corruption if the repositories are backed up using RSync?
TIA,
Ole
|
Performing Hotbackups of Gogs
|
Open the SQL file using a text editor, then search for the specific database; in my case there was a comment declaring where it began. Then copy all of that database's tables into a separate file with a .sql extension.
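If the dump happens to have been produced by mysqldump --all-databases (an assumption; check your file), each database section starts with a "-- Current Database:" comment, so you can also cut it out from the command line. Database and file names below are placeholders:
sed -n '/^-- Current Database: `wpdb`/,/^-- Current Database: `/p' full_backup.sql > wpdb.sql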
|
|
I'm moving a website, and I backed up all of my databases from the old host into one total SQL backup file.
I need to restore a specific database inside this file to my new host, where it is used for the WordPress site.
How would I achieve this?
cheers
|
How to restore a specific database from a total SQL backup?
|
It's pretty straightforward. Use cqlsh to log onto your local Cassandra node, create the same schema (DESCRIBE SCHEMA on the node you're copying from), and copy the data for each table from your table_name1 dir to:
data_dir/keyspace_name/table_name1/*
on the local system.
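As a rough sketch of those steps (the keyspace/table names are the placeholders from the question; on newer Cassandra versions the table directory also carries a UUID suffix, and the cassandra user/group is an assumption based on a package install):
# after recreating the keyspace and tables via cqlsh, copy the SSTable files
sudo cp -r /media/usb/keyspace_name/table_name1/* /var/lib/cassandra/data/keyspace_name/table_name1/
sudo chown -R cassandra:cassandra /var/lib/cassandra/data/keyspace_name
# load the copied SSTables without restarting the node
nodetool refresh keyspace_name table_name1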
Doesn't work: cqlsh> USE keyspace_name; InvalidRequest: code=2200 [Invalid query] message="Keyspace 'keyspace_name' does not exist" This is my step by step: 1. sudo service cassandra stop (local node) 2. sudo rm -rf /var/lib/cassandra/data/system/* (local node) 3. sudo rm -rf /var/lib/cassandra/commitlog/* 4. sudo rm -rf /var/lib/cassandra/data/system/* 5. sudo cp -r /media/usb/kyspace_name /var/lib/cassandra/data/ What am I doing wrong? Thank you
– Jmv Jmv
Apr 27, 2016 at 20:22
did you CREATE KEYSPACE keyspace_name ... ? as part of the "Create the same schema"
– Chris Lohfink
Apr 27, 2016 at 20:24
Yes, but only the keyspace_name, I can't create the column family (tables) because I don't know these. I mean, I don't know how it is built the database for that reason I want to restore. Thank you a lot
– Jmv Jmv
Apr 28, 2016 at 14:48
|
|
Regards community,
I have the files (folders on a USB drive) from a Cassandra database, such as /var/lib/cassandra/data/keyspace_name/table_name1/table_name2, and I want to know the way/process to restore them on my local Cassandra node.
Thank you
|
Restoring Database in local node Cassandra
|
WAL recovery assumes you start from the same data directory layout, including tables, tablespaces, etc. If you have lost a tablespace before applying your WAL segments, you need to see what you can do to get the right database backup. If that was missed, you will not be able to restore data that was initially in the table.
Data which is in the WAL may be recoverable by a real expert, but it will not likely be complete.
In other words, go back and look at your backups again.
|
|
I'm facing a problem while restoring and recovering a cold backup with WALs. My database uses two tablespaces: I created a separate tablespace located on another disk, which holds tables that are not in the default tablespace. I'm getting an error like the one below while restoring the cold backup onto another server:
could not open tablespace directory "pg_tblspc/132528327/PG_9.1_201105231": No such file or directory.
The server is actually up and running fine after recovery completes with the archives, but the data changes in the other tablespace are not recovered; only the restored data is there. Please advise how to apply the archives (WALs) to the tables that live in the other tablespace on the other storage.
|
could not open tablespace directory "pg_tblspc/132528327/PG_9.1_201105231" during cold backup restoration with two different tablespaces
|
If anyone else has this question: I found Git to be a great way to achieve this purpose. Unlike many version control systems, it keeps the full change repository on the local machine and only places it on a server when you choose to push. As of 2015, it integrates well with Visual Studio and TFS. Here is a video from the Build conference explaining its integration with VS2015:
Channel 9 - using git in visual studio
https://channel9.msdn.com/Events/Build/2015/3-746
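As a rough sketch of the purely local workflow (no server involved; the tag name is just an example):
git init
git add .
git commit -m "working build"
git tag known-good-build          # mark a state you may want to return to
git checkout known-good-build     # later: roll the working tree back to that state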
|
In the process of building an ASP.NET Core MVC rc1 application with SQL databases, c#, bootstrap, angular, css, javascript, javascript dependencies, package managers like bower, or any visual studio project for that matter, I sometimes break the application and would like to roll it back to a previous state when the application was working.
What are some techniques/the best way to create incremental versions, save and flag working versions, and rollback to earlier versions especially when a project has so many moving parts, technologies, and dependencies?
I would prefer a technique that exists inside Visual Studio, or the most standard/popular Microsoft or open source technique or tool that may be free.
I also would like the option to do the backups on my local machine rather than on an external server.
|
How can I rollback to earlier versions of a Visual Studio project while developing locally?
|
Nothing ready-made that I am aware of. You should just code this yourself in bash and call the script from cron. Use find . -maxdepth 1 -type d -mtime +5 -exec rm -r {} \; to remove backups older than 5 days. Incremental backups using innobackupex are based on a full backup, so you need to be able to reference that full backup's directory in your script. I've written many of these for my clients (I work for Percona), so it isn't hard; it shouldn't be more than 20-30 lines of bash. A rough sketch follows.
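A very rough sketch of the shape such a script usually takes; paths, credentials and the mail address are placeholders, and the exact innobackupex options should be checked against your Percona XtraBackup version:
#!/bin/bash
ROOT=/backups
FULL="$ROOT/full-$(date +%F)"
if [ ! -d "$FULL" ]; then
    # first run of the day: take the full backup
    innobackupex --user=backup --password=secret --no-timestamp "$FULL"
else
    # later runs: incremental against today's full backup
    innobackupex --user=backup --password=secret --no-timestamp \
        --incremental --incremental-basedir="$FULL" "$ROOT/inc-$(date +%F-%H)"
fi
# rotate: drop backup directories older than 5 days, then mail a status line
find "$ROOT" -maxdepth 1 -type d -mtime +5 -exec rm -r {} \;
echo "backup run finished $(date)" | mail -s "innobackupex status" dba@example.com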
|
|
Innobackupex provides both full and incremental backups of MySQL servers,
but I am looking for a script that automates the process of a daily full backup plus incrementals at certain hours.
The script should remove old backup files, mail the status, etc.
Any ideas or a ready-made script?
Thanks
|
How to set up rotational full and incremental innobackupex?
|
The error was "The identifier that starts with ---- is too long. Maximum length is 128."
So I renamed "MyDatabase.mdf" to the shorter "MyDb.mdf", which keeps the identifier under 128 characters.
My code is
using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["NewConnectionString"].ConnectionString))
{
    con.Open();
    string DatabaseName = Application.StartupPath + @"\MyDb.mdf";
    SqlCommand cmd = new SqlCommand("BACKUP DATABASE [" + DatabaseName + "] TO DISK='D:\\MyBackup.bak'", con);
    try
    {
        cmd.ExecuteNonQuery();
        MessageBox.Show("Success");
    }
    catch (Exception Ex)
    {
        MessageBox.Show("'" + Ex.ToString() + "'");
    }
    con.Close();
}
This successfully takes the backup.
|
|
string dbpath= System.Windows.Forms.Application.StartupPath;
string dbp = dbpath + "\\MyDatabase.Mdf";
SqlCommand cmd = new SqlCommand("backup database ['"+dbp+"'] to disk ='d:\\svBackUp1.bak' with init,stats=10",con);
cmd.ExecuteNonQuery();
|
How to take a backup of LocalDB in C# when my database file name is MyDatabase.mdf
|
I already solved it with a curl upload:
curl --interface eth2 --ftp-create-dirs -T file.zip ftp://$user:$password@$ftp_server:$ftp_port/$date/$time/
But thanks for your answer ;)
Simon
|
I have this code for backups:
#FTP folder create
ftp -n -v $ftp_server $ftp_port << EOT
binary
user $user $heslo
mkdir $datum
cd $datum
mkdir $cas
EOT
The server is connected to a VPN with one adapter and to the local network with a secondary adapter, eth1. I need to back up files to the local network, but when I set the local IP as the $ftp_server variable, I get a "Not Connected" error.
This server is running Debian 8 64-bit in a VMware Workstation environment.
|
Force eth1 for FTP connection
|
Looking at the docs it seems all of this is configured via your AndroidManifest.xml (configurable via tiapp.xml in Titanium) and res/xml files (stored under app/platform/android in Alloy 1.8+). So you should be able to use this with Appcelerator Titanium.
Thanks for your comment on this. I figured that would work with Android 6, but if you look at this: Android Developer link regarding supporting previous versions of Android, it says you need to use the Backup API along with a BackupAgent, which looks to be custom Java code?
– Pragmateq
Apr 7, 2016 at 14:55
Yep, that's correct. It's hard to say if that could be done in a Titanium module. It might be a good idea to create a JIRA ticket requesting for this to be researched, also as a way to achieve parity with iCloud backup of files for iOS.
– Fokke Zandbergen
Apr 8, 2016 at 11:33
|
|
In Android 6 it looks like Google has finally got its automatic backup service to include pretty much all app data in its backups to a nominated Google account, as long as android:targetSdkVersion="23".
However, in versions of Android prior to 6, as I understand it, you need to implement a custom BackupAgent in order to include specific files in the Backup Service, such as app-generated files and databases.
How might I achieve this in Appcelerator? Would a custom module be required, along with new entries in the Android section of tiapp.xml?
http://developer.android.com/training/backup/autosyncapi.html
http://developer.android.com/training/backup/autosyncapi.html#previous-androids
|
Anyone implemented Android Backup Service with android:backupAgent?
|
Yes, you can use the preSync and postSync MSDeploy operation settings:
msdeploy -verb:sync -preSync:runCommand="net stop w3svc" -source:webserver60 -dest:auto,computername=serverA -verbose -postSync:runCommand="net start w3svc"
https://technet.microsoft.com/en-us/library/ee619740(v=ws.10).aspx
|
|
We have an ASP.Net 4.5.2 WebForms application in Visual Studio 2015. We want to create a Web Deploy package that:
Backs up certain folders & files on the target system/IIS server
Deletes the old files
Copies the new files
Copies the backup up files back
Possibly sets some folder file permissions
Is WebDeploy the right tool for this? Or is it too basic for such "pre" and "after" tasks?
Would the runCommand provider be the way to go?
https://technet.microsoft.com/de-de/library/ee619740(v=ws.10).aspx
Any hints would be appreciated
|
Run before & after scripts with .Net Web Deploy
|
Regarding your first question: you can restore a previous backup from iTunes; there is no need for Wi-Fi.
- Step 1 - Connect your phone to iTunes and choose your iPhone.
- Step 2 - Choose Backup, then click Restore Backup.
- Step 3 - Find My iPhone must be turned off before the iPhone can be restored. Go to Settings > iCloud on your iPhone and turn off Find My iPhone before restoring.
- Step 4 - For reference, iTunes stores the backup files locally here:
Windows Vista, Windows 7 and Windows 8: C:\Users\user\AppData\Roaming\Apple Computer\MobileSync\Backup
Mac OS X: ~/Library/Application Support/MobileSync/Backup/
- Step 5 - Finish your restoration.
|
|
I have a question concerning iOS backup and restore. Is there a way to restore an existing iCloud backup that I made via iTunes, and not directly over Wi-Fi?
And vice versa: can I make a backup via iTunes and save it as an iCloud backup, or can iTunes only store backups locally?
Thanks in advance!
|
Restore iOS iCloud Backup via iTunes
|
There are 3rd party tools that can assist in what you are asking for:
APEX SQL Recover
Idera SQL Virtual DB
They may require registration but offer a fully functional trial to get the job done.
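If third-party tools are not an option, the usual native workaround is to restore the backup as a scratch database and copy the one table across. A sketch; the database, table and logical file names are assumptions you would adjust:
RESTORE DATABASE MyDb_Scratch
    FROM DISK = 'D:\Backups\MyDb.bak'
    WITH MOVE 'MyDb' TO 'D:\Data\MyDb_Scratch.mdf',
         MOVE 'MyDb_log' TO 'D:\Data\MyDb_Scratch_log.ldf',
         RECOVERY;

SELECT *
INTO dbo.MyTable_Restored          -- created in the current (original) database
FROM MyDb_Scratch.dbo.MyTable;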
|
|
Problem:
I need to restore only one table from a backup. What is the right course of action?
Thanks for your help!
|
Restore one table only in SQL server
|
First, ZIP it
Second, you can try to upload it to S3 and then either download it directly from there, or create a CloudFront distribution for that S3 bucket and download it through CloudFront.
I'm not sure that all in all it would be faster (because it would also take time to upload to S3) but it's worth a shot.
Also, if the database contains any personal or sensitive data, keep security in mind if you plan to allow public access to the uploaded file.
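A rough sketch of that approach, assuming the AWS CLI is installed and configured on the instance (bucket and file names are placeholders):
gzip -9 dump.sql                                          # compress before any transfer
aws s3 cp dump.sql.gz s3://my-backup-bucket/dump.sql.gz   # upload from EC2
# then, on the local machine, pull it down (or fetch a CloudFront URL pointing at the bucket)
aws s3 cp s3://my-backup-bucket/dump.sql.gz .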
|
|
I am using an AWS Amazon machine for my web application. Currently my database dump size is 15 GB. I am trying to download the database dump to my local machine with the scp command, and downloading the 15 GB dump takes around 1 hour.
I want to know the fastest way to download the database dump from the remote machine to my local machine.
|
How to transfer large database dump VERY FAST from remote aws machine to local machine
|
Every system I know of that stores large numbers of big (media) files stores them externally to the database, so you keep a reference to the actual media file in the database.
Also, if you're going to have thousands of media files, don't store them all in one giant directory; that's a performance bottleneck on some file systems.
Instead, break them up into multiple balanced sub-trees, as sketched below.
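As a tiny illustration of what balanced sub-trees look like in practice (purely a sketch, not tied to any particular CMS): derive the directory from a hash of the file name so files spread evenly.
import hashlib
import os

def media_path(root: str, filename: str) -> str:
    # the first two pairs of hex digits give up to 256 x 256 evenly filled folders
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], filename)

print(media_path("/var/www/media", "holiday.jpg"))  # e.g. /var/www/media/<xx>/<yy>/holiday.jpg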
|
|
I own a web app and I need to download the latest backup of it. I don't need the media like images, audios (lots of it), videos, etc. I only need the code, and specifically, only the code that was built by us, don't need the server config files and other low-level stuff.
As you know I have several options: Full Backup, /home directory backup, Database backup, etc.
The problem is that if I do a full Backup I might download 30GB and, as I mentioned I don't need all the heavy media files. And I don't know if /home directory will do the same as the media is stored in it.
And regarding the database backup, I am using a CMS, so those media files are referenced inside the databases. The question is: is it only a reference, or is the actual media file stored inside the database?
It may sound dumb, but hey, we are not perfect.
|
What does the cPanel website backup include?
|
This depends on what you want to back up.
If you want the commit history, then copy .git.
If you only need the newest state of the source code, 'git archive' is fine.
Before doing either, be sure all your modifications have been committed.
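A compact way to make that backup without keeping a second working tree (a sketch; paths are placeholders):
git clone --mirror /projectpath /backuppath/project.git   # all branches, tags and history, no checkout
git clone /backuppath/project.git /restorepath            # later: recover a full working copy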
I want all source code, commit history, branch, just like original folder. so after I copy .git to new place, what should I do?
– user1575921
Mar 12, 2016 at 14:13
@user1575921 Copy .git to a new folder, e.g. Backup, and run git branch to list all branches and checkout worktree as you want under Backup folder. Or use it as a remote repository, git clone file:///path_to_backup/.git
– gzh
Mar 12, 2016 at 14:21
thanks for reply, I still don't get it. after copy .git folder to new place new and cd ./new and git branch -a that just list all branch but how to bring back the all source code files?
– user1575921
Mar 13, 2016 at 17:55
|
|
I have a project that uses git locally.
If I want to back it up to another hard disk, do I have to copy all the files plus the .git folder?
I tried git clone /projectpath/ /backuppath/ but that still creates a copy folder with the source code files.
Can I just copy the .git folder, and is there a way to recover all the files from it, to save disk space?
If so, how do I do it?
I want to keep all branches and all source code...
|
backup and restore files from .git
|
Well look at this bit:
Get-ChildItem -Path $path -Recurse -Force -EA SilentlyContinue |
Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $limit } |
Remove-Item -Force -EA SilentlyContinue
It's basically saying "everything that is not a folder and is older than the specified limit gets removed". So your first step is to remove that block.
The second part just deletes empty folders. You can keep it as-is, or you could add the CreationTime to the Where statement:
Get-ChildItem -Path $path -Recurse -Force -EA SilentlyContinue |
Where-Object { $_.PSIsContainer -and $_.CreationTime -lt $limit -and
    (Get-ChildItem -Path $_.FullName -Recurse -Force |
        Where-Object { $_.CreationTime -ge $limit }) -eq $null } |
Remove-Item -Force -Recurse -EA SilentlyContinue
The second Where statement returns a list of files and folders newer than $limit, and only deletes the folder if that is null.
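If all you want is the behaviour described in the question (remove top-level folders older than $limit together with everything inside them, and touch nothing else), a minimal sketch would be:
Get-ChildItem -Path $path -Force -EA SilentlyContinue |
Where-Object { $_.PSIsContainer -and $_.CreationTime -lt $limit } |
Remove-Item -Force -Recurse -EA SilentlyContinue
Note there is no -Recurse on Get-ChildItem, so only folders in the root of $path are considered.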
|
I have the following code to keep on top of old folders which I no longer want to keep
Get-ChildItem -Path $path -Recurse -Force -EA SilentlyContinue|
Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $limit } |
Remove-Item -Force -EA SilentlyContinue
Get-ChildItem -Path $path -Recurse -Force -EA SilentlyContinue|
Where-Object { $_.PSIsContainer -and (Get-ChildItem -Path
$_.FullName -Recurse -Force | Where-Object { !$_.PSIsContainer })
-eq $null } | Remove-Item -Force -Recurse -EA SilentlyContinue
It deletes anything older than a certain number of days ($limit) including files and folders.
However, what I am after is ONLY deleting old folders and their contents.
For example, a day-old folder may have a file within it that is a year old, but I want to keep that folder and the old file. The code above keeps the folder but deletes the file. All I want to do is delete folders (and their contents) within the root that are older than $limit, and leave the other folders and their content alone.
Thanks in advance.
|
Powershell - delete old folders but not old files
|
The in-memory database has no pages because it is empty.
An attached database stays separate, i.e., its data is not merged into the backup.
To back up an attached DB, you must give its name (and not "main") to sqlite3_backup_init().
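A minimal sketch of that call, assuming the file database was attached under the alias "filedb" (the alias name is an assumption; use whatever name you gave in your ATTACH statement):
#include <sqlite3.h>

// copy the attached database (not "main", which is the empty in-memory one) out to a file
int backup_attached(sqlite3 *mem_db, const char *attached_name, const char *dest_path)
{
    sqlite3 *dest = nullptr;
    int rc = sqlite3_open(dest_path, &dest);
    if (rc != SQLITE_OK) {
        sqlite3_close(dest);
        return rc;
    }
    sqlite3_backup *bk = sqlite3_backup_init(dest, "main", mem_db, attached_name);
    if (bk) {
        sqlite3_backup_step(bk, -1);   // -1 = copy all remaining pages
        sqlite3_backup_finish(bk);
    }
    rc = sqlite3_errcode(dest);
    sqlite3_close(dest);
    return rc;
}

// usage (the alias is whatever you used in ATTACH):
//   backup_attached(mem_db, "filedb", "memdb_from_attached_backup.db");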
I know (as i described it has no pages), but what should be the conclusion ? My in-memory source database is not empty (right before backup, it is created with 1 table, 5 columns and 500 rows inserted) but the backup is empty: no tables, no data.
– nullptr
Mar 7, 2016 at 13:01
|
|
I want to have an SQLite database running in memory.
I can load a file-based database into a memory database,
and I can do a backup of a file-based database, but what fails is
backing up a memory database to a file.
I checked both samples exposed here:
https://www.sqlite.org/backup.html
I mean, I used these examples.
The result is always SQLITE_OK on all sqlite function calls, except 101
for
sqlite3_backup_step
in the 2nd example. To make sure there is no mistake, I checked that the memory database has tables
and data. This is the case. Also, the same backup function works very well for a file database.
As far as I could investigate, this line
nSrcPage = (int)sqlite3BtreeLastPage(p->pSrc);
in function
sqlite3_backup_step (sqlite3.c)
always returns 0. So the database has no "pages".
The file-based database returns 35 at this point.
So it seems no copy of any kind happens, because no "pages" are
reported for my memory database; but this memory database definitely has tables
and data.
/// failing backup
bool backup_test_sqlite_mem2(void)
{
/// create a file db
sqlite3* file_db =create_db("filedb.db",true);
if(file_db != nullptr)
sqlite3_close(file_db);
/// we now have a database file
/// create an empty mem db
sqlite3* mem_db=create_db(":memory:",false);
/// attached prior created file to mem db
attach_db_test_sqlite_mem(mem_db,"filedb.db");
/// check we have content in mem db
do_select(mem_db, "SELECT count(*) FROM stock","rows in backup_test_sqlite_mem2");
/// finally back memdb
int ibackup= backupDb(mem_db,"memdb_from_attached_backup.db",nullptr);
/// the above backup is empty
return (ibackup == SQLITE_OK ? true : false);
}
|
sqlite backup memory database c++
|
You could have one process with two FileSystemWatchers.
1. The first watches an incoming file location, and moves (not copies) files from the incoming location to an outgoing location.
2. The second watches the outgoing location and pushes files to the cloud.
In addition to the FileSystemWatchers, the process scans the incoming location on startup. That way if it was down and new files were added, when it restarts those new files still get moved to the outgoing location. While the process is down nothing is getting moved to the outgoing location so there's nothing for it to miss.
Update
I suppose it also depends on the nature of the files. If you need greater reliability then you could build a more robust process, capturing the details of any file in the location and enqueuing a list of files to be copied (perhaps in a table.) That way you don't have to rely on the presence or absence of file to determine status.
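A rough C# sketch of that shape (paths are placeholders and error handling is omitted):
using System;
using System.IO;

class IncomingMover
{
    const string Incoming = @"C:\data\incoming";
    const string Outgoing = @"C:\data\outgoing";

    static void Main()
    {
        // startup scan: catch anything that arrived while the process was down
        foreach (var file in Directory.GetFiles(Incoming))
            File.Move(file, Path.Combine(Outgoing, Path.GetFileName(file)));

        // then watch for new arrivals and move them as they appear
        var watcher = new FileSystemWatcher(Incoming);
        watcher.Created += (s, e) =>
            File.Move(e.FullPath, Path.Combine(Outgoing, Path.GetFileName(e.FullPath)));
        watcher.EnableRaisingEvents = true;

        Console.ReadLine();   // keep the process alive; a real service would not do this
    }
}
A second watcher (or the cloud uploader itself) then drains the outgoing folder the same way.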
|
I want to use FileSystemWatcher to immediately push newly generated files to the cloud.
My concern is that if the app doing the watching is shut down for some time, then it will miss some files and they'll never make it to the backup.
Is there any way around this? Or should I use a message queue?
|
Use FileSystemWatcher to backup files
|
The error log said it was unable to open the database:
"home/ec2-user/project/db/production.sqlite3": unable to open database file
Please check your production.sqlite3 path.
I think it should be "/home/ec2-user..." (with a leading slash).
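In other words, the SQLite block should look something like this (same settings, just with the absolute path):
database SQLite do |db|
  db.path = "/home/ec2-user/project/db/production.sqlite3"
  db.sqlitedump_utility = "/usr/bin/sqlite3"
end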
|
|
I had a problem backing up a database using the backup gem in Rails.
This is my daily_db_backup.rb:
Model.new(:daily_db_backup, 'Description for daily_db_backup') do
database SQLite do |db|
db.path = "home/ec2-user/project/db/production.sqlite3"
db.sqlitedump_utility = "/usr/bin/sqlite3"
end
compress_with Gzip
store_with S3 do |s3|
s3.access_key_id = "my_key_id"
s3.secret_access_key = "my_access_key"
s3.region = 'ap-northeast-2'
s3.bucket = 'project'
s3.path = 'home/ec2-user/project/db/production.sqlite3'
end
notify_by Mail do |mail|
mail.on_success = true
mail.on_warning = true
mail.on_failure = true
mail.delivery_method = :sendmail
mail.from = "[email protected]"
mail.to = "[email protected]"
end
end
and these are the error messages:
[error] Model::Error: Backup for Description for daily_db_backup (daily_db_backup) Failed!
[error] --- Wrapped Exception ---
[error] Database::SQLite::Error: Database::SQLite Dump Failed!
[error] Pipeline STDERR Messages:
[error] (Note: may be interleaved if multiple commands returned error messages)
[error]
[error] Error: unable to open database "home/ec2-user/project/db/production.sqlite3": unable to open database file
[error] The following system errors were returned:
[error] Errno::EPERM: Operation not permitted - 'echo' returned exit code: 1
How can I solve this problem?
|
backup database using gem "backup" in rails
|
Without moving to a newer SQL Server version (Express supports none of the technology you would normally use here), you could schedule regular/frequent backups with Windows Task Scheduler that push the backup files onto a shared drive on the laptop. Then either restore manually (on power loss) or schedule regular restore jobs on the laptop, again using Task Scheduler. See the MS link below.
Hacky, yes, but it should work, assuming you're not backing up a massive DB.
You'd be better off investing in a solid UPS which can keep your primary DB operational, or at least allow it to shut down cleanly.
https://support.microsoft.com/en-us/kb/2019698
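As a sketch of the two scheduled steps (instance, database and share names are placeholders; -E uses Windows authentication):
rem on the server, run every x minutes from Task Scheduler
sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [ClinicDb] TO DISK='\\laptop\backups\ClinicDb.bak' WITH INIT"

rem on the laptop, restore the most recent copy (WITH REPLACE overwrites the local read-only copy)
sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE DATABASE [ClinicDb] FROM DISK='C:\backups\ClinicDb.bak' WITH REPLACE"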
|
As a small medical NGO we would like to have a copy of our SQL Server (MS SQL 2010 Express) database on a laptop.
That way, if the power goes down, we can at least read (not update) the data.
Because it's unpredictable when the power goes down and we need the latest available data, the backup SQL DB should be updated continually (say once every x minutes, or on every change, not once a day).
How can we do this? Thanks!
|
second ms sql server as backup
|