Response | Instruction | Prompt
---|---|---|
Maybe a stupid answer, but have you thought about a mirror?
|
|
I have a database that I need to provide redundancy for.
It is Codebase, but using a typical SQL database uses too much CPU, and having the DB offsite causes too much latency in my process.
I need a viable solution for providing data redundancy with an offsite location for my time-critical process.
|
I need high performance data redundancy over IP
|
Your file seems to refer to database_db.mdf and database_db_log.ldf. That information is generally found in files of the database_db.bak type. Try adding the .bak extension to your file and then, in a SQL Server instance, restore database_db from that file.
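If the SQL Server command-line tools are available, a minimal restore sketch could look like the following (server, credentials, and paths are placeholders; add WITH MOVE clauses if the original file paths do not exist on the target server):
# list the logical files contained in the backup first
sqlcmd -S localhost -U sa -Q "RESTORE FILELISTONLY FROM DISK = N'/var/backups/database_db.bak'"
# then restore the database from the renamed file
sqlcmd -S localhost -U sa -Q "RESTORE DATABASE [database_db] FROM DISK = N'/var/backups/database_db.bak'"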
|
I received a file like this from my hosting provider when I downloaded the backup. The file had no extension and was inside a zip file. I want to read the data that is inside this backup or, better yet, load all the info back into the tables if that's possible. I am afraid everything is gone; hopefully someone knows a way. I am using EntityFrameworkCore, so I can get the tables back using migrations, but it's the data that I need.
The file.
|
Restore MSSQL backup file
|
I use this bash function:
backup() {
    local file new n=0
    # fmt expands to NAME.YYYYMMDD_NN (bash printf's %(...)T formats a time; -1 means "now")
    local fmt='%s.%(%Y%m%d)T_%02d'
    for file; do
        while :; do
            # shellcheck disable=SC2059
            printf -v new "$fmt" "$file" -1 $((++n))
            [[ -e $new ]] || break   # keep counting up until the name is free
        done
        command cp -vp "$file" "$new"
    done
}
I can't help you integrate that into your desktop environment.
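For illustration, a usage sketch (the file name here is hypothetical and the date suffix depends on the day you run it; cp -v prints each copy it makes):
$ backup notes.txt
'notes.txt' -> 'notes.txt.20221024_01'
$ backup notes.txt
'notes.txt' -> 'notes.txt.20221024_02'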
@glenn_jackman, I've rewritten your nested loops with for file; do while printf -v new "$fmt" "$file" -1 $((++n)); do [[ -e $new ]] || break done; done It seems to produce the same results; what are the odds of it breaking at some point? :-)
– Jetchisel
Oct 24, 2022 at 14:53
check the manual: when does printf return a non-success exit status.
– glenn jackman
Oct 24, 2022 at 15:06
|
|
In Linuxmint, you can copy a filename and append the date with the following command in the terminal:
cp test.txt file_`date +"%Y%m%d"`.txt
This duplicates the file with the following name: "file_20221024.txt", which can be saved as a form of backup of the file. I would like to put this short command as an option when I right-click a file in my system: (Linuxmint Mate with Caja for folder structure).
I know it must be possible by invoking a saved script that receives the selected file as an argument, but I don't know how to write and implement it. Any help or guidance is welcome.
|
Duplicate same filename adding date as a form of dated backup
|
As @Fravadona suggested, your best bet would be to use a cron job. For such a short and self-explanatory command, I wouldn't even use a script per se.
You can add a new job by typing sudo crontab -e in your terminal (since you want to run a command with sudo, this opens root's crontab). To better understand how the frequency is configured, you can use, for example, crontab.guru.
Then, you can add the new entry using your preferred text editor. As suggested in the comments, you could add a timestamp to the tar file name to keep track of it.
For example, a backup that runs every day at 08:00:
0 8 * * * /usr/bin/sudo /usr/bin/tar -cvpzf /home/backup_`date +\%s`.tar.gz /home/data
Note: The date +%s command returns a unix timestamp.
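If a human-readable name is preferred (a comment below suggests the same), the entry could use a formatted date instead; note that % must stay escaped as \% inside a crontab:
0 8 * * * /usr/bin/sudo /usr/bin/tar -cvpzf /home/backup_`date +\%F-\%H\%M\%S`.tar.gz /home/data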
Replace +%s by for example +%F-%H%M%S to get a more readable time stamp.
– Wiimm
Sep 10, 2022 at 8:01
|
|
I want to write a bash script to back up my directories every day.
This is my script:
$ sudo tar -cvpzf /home/backup.tar.gz /home/data
How can I automate this?
|
How can I write a bash script to take a backup
|
You can move a Git repository from one path to another without affecting it. A Git repo is a self-contained entity (with the exception of global Git config) and the .git folder containing all Git files is just like any other folder (so it can be moved, deleted, etc.).
If you want to use OneDrive as a backup, there are better options than simply copying your project over. Tools like restic or similar would allow you to keep your backup up to date in an efficient way.
Note that you can also set OneDrive as a Git remote. Since it sounds like you are missing options such as GitHub or Bitbucket for your company, this is probably your best option.
To add a remote called onedrive on OneDrive, you can run:
git init --bare ~/OneDrive/<project>.git
git remote add onedrive ~/OneDrive/<project>.git
And then you can use onedrive as you would any other remote. For instance:
git push --set-upstream onedrive master
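For completeness, recovering the project later from that remote is just a clone of the same path (a sketch reusing the placeholder from the commands above):
git clone ~/OneDrive/<project>.git restored-project
cd restored-project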
|
I just arrived in a company which does not have access to GitHub or similar.
I was wondering about ways to backup my code using the private OneDrive which is given to each employee.
I just thought of moving my folder from the "document" directory to the "OneDrive" directory. But I was wondering whether changing the path will break my code.
Will it preserve the .git directory? Should I rerun git init (losing the past history)?
|
Backup options for a project under version control with Git
|
Once a database is deleted, it is gone. It can only be recovered if you have stored a backup of it somewhere else.
|
How can I retrieve the phpMyAdmin database?
currently I'm using a version of
here is the database server information
Server: Localhost via UNIX socket
Server type: MariaDB
Server version: 5.5.64-MariaDB - MariaDB Server
Protocol version: 10
User: root@localhost
Server charset: UTF-8 Unicode (utf8)
|
how to recover deleted database from phpmyadmin?
|
Use Google.
Next, you can look at https://sqlbackupandftp.com/ or https://sqlbak.com/ or any other such tool.
Or see https://dba.stackexchange.com/questions/135662/backup-sql-server-to-google-drive
Good luck.
|
How to set up a SQL job for automatically backing up a SQL database (database name: Test) to Google Drive
|
Backup sql database to Google drive using jobs
|
Look into sfdisk(8). Examples from man page:
sfdisk --dump /dev/sda > sda.dump # writes ascii description of partition layout to file
sfdisk /dev/sda < sda.dump # parses ascii description of partition layout from file
I would probably suggest using the --backup option instead of the above commands. The --backup option causes the actual contents of the partition table to be stored, rather than regenerating it from parsing the text file.
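As a sketch of how that fits an unattended backup script (the device names are assumptions; adjust the list to your own disks):
for disk in sda sdb; do
    sfdisk --dump "/dev/$disk" > "ptable_${disk}.dump"
done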
|
I have a small collection of Linux/Windows systems that I like to back up using Bash scripts. I like to back up everything, not just user data, so I use things like dd and ntfsclone. The details are different for each system, so to avoid confusion and errors I like to have it scripted.
I'm pretty happy with everything except the backup and possible restore of the partition tables on my drives. There is a mix of MBR and GPT disks.
I'm most familiar with Bash and Python for scripting, but will take whatever works. I'd like something that can be started (with backup media mounted) and left unattended.
Is there something that can do this for me?
|
how to dump/restore an HDD partition table from the command line (Bash)
|
Too long to comment, but this is usually done in a SQL Agent job.
"Split backups" can mean one of (at least) four things, which you'll need to get clarified by the vendor if you don't already know. I excluded partial backups:
The vendor is sending all of the backups listed above in one day, meaning it's a base and 3 differentials or log backups (possible)
The vendor is only sending differential backups since they originally sent you the base (unlikely)
The vendor is backing up filegroups or files on different days (maybe a VLDB) and sending to you as they happen (unlikely scenario)
The vendor is sending you daily full backups, in which the last one you got is the most recent (most likely)
Excluding file and filegroup restores, you will:
Restore the full backup
Restore the last differential backup, if applicable
Restore any log backups that happened after the last differential backup, or since the last full backup if differentials aren't happening
You could also choose any differential in step 2, and then all log backups since that differential if it exists. Naturally, if they are only sending you full backups, then you simply need to restore the full backup only.
Something along these lines... which will vary based on your environment (AlwaysOn, setting to read only, setting to STANDBY, etc...)
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
RESTORE DATABASE [AdventureWorks] FROM DISK ='\\UNC\AdventureWorks.back0'
WITH
--what ever options... but likely a file move
MOVE 'data_file_1' TO 'E:\somefolder\data.mdf',
MOVE 'db_log' TO 'E:\somefolder\log.ldf',
REPLACE, --overwrites the database
RECOVERY --sets the DB to READ/WRITE. Use NORECOVERY if you need to restore logs / differentials
GO
--if using logs...
RESTORE LOG AdventureWorks
FROM DISK = '\\UNC\AdventureWorks.back01' --assuming this is a log backup
WITH FILE = 1, --this is the first log
NORECOVERY; --keep in norecovery to restore other logs...
GO
etc...
|
A third party is sending us split backups that we need to restore every day into a single database. Is there a way to automate this which is low maintenance?
Preference is to do this writing either T-SQL or a SQL Server job:
AdventureWorks07182018.back0
AdventureWorks07182018.back1
AdventureWorks07182018.back2
AdventureWorks07182018.back3
|
Restore a database from multiple .bak files SQL Server 2012 [closed]
|
Yes, that will work just fine, you can directly back up the entire work directory if Nexus is not running.
|
I'm using the docker container to run Nexus 3.
If I stop the container, then backup the volume directory for /nexus-data will this result in a valid backup?
Just want to confirm.
Thanks!
|
Nexus 3 Backups - Can I just backup /nexus-data?
|
REM DIR /-C /-O /W lists the drive without thousands separators; its last line ends in "... bytes free"
REM FOR /F takes token 3 of each line, so FREE ends up holding the free-byte count from that last line
FOR /F "usebackq tokens=3" %%s IN (`DIR C:\ /-C /-O /W`) DO (
    SET FREE=%%s
)
ECHO %FREE%
Remember to change C:\ to the external drive's letter.
Could you maybe explain what each command does? I'm still new to this and trying to learn as much as I can.
– Wade
Jun 28, 2018 at 11:57
@Wade, to find out what each command does, open a Command Prompt (cmd.exe) window and type the name of the command followed by a forward slash and question mark, e.g. FOR /? or DIR /? or SET /? or ECHO /?
– Compo
Jun 28, 2018 at 12:00
If you are running a batch file from a partition for which you need to know the free space, you can use %~d0 instead of the drive letter DIR %~d0\ /-C /-O /W
– Grygorii
Oct 19, 2023 at 16:34
|
|
I'm busy automating backups with bat scripts. Can I compare the available space on the external drive to the size of the backup data?
I want to make sure there is enough space for the data that needs to be backed up, and if there isn't, I need it to send me an email. (I know how to send emails and such; I just need to know how to check the disk space.)
|
Check if drive has enough space for backup
|
Unfortunately, when it comes to MySQL there are a lot of different solutions that give different results. One issue I used to face all the time was that table relationships with foreign keys spat out errors on import because of table dependencies.
Take a look at this answer to learn more, but generally I would use mysqldump, as it's the most widely known method.
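A minimal mysqldump sketch (database name and user are placeholders; --single-transaction is assumed appropriate because it gives a consistent snapshot of InnoDB tables without locking them):
# dump one database and compress it on the fly to keep the 40 GB file smaller
mysqldump --single-transaction --routines --triggers -u backup_user -p mydb | gzip > mydb_$(date +%F).sql.gz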
|
|
I'm looking for the right way to back up a MySQL database.
When I dump my database, the resulting SQL file is 40 GB.
Maybe the right way is to dump each table in a bash script loop?
Thanks for the help...
|
The right way to back up a MySQL database?
|
from man rsync:
--delete delete extraneous files from dest dirs
rsync will not delete source files.
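If in doubt, a dry run lists what would be transferred or deleted on the destination without changing anything; for example, adding -n (--dry-run) to the command from the question below:
rsync -zavxn -e 'ssh -p22' --numeric-ids --delete -r --link-dest=../"$yesterday" "$site_source" "$site_dest"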
|
I am using rsync in my bash script to back up a website's 'public_html' folder to a local destination folder on my computer.
Here is my rsync code:
if rsync -zavx -e 'ssh -p22' \
--numeric-ids \
--delete -r \
--link-dest=../"$yesterday" "$site_source" "$site_dest";
then
...
else
...
fi
What I am confused by is what privileges rsync has to delete files. I want to ensure that it's not possible for any 'source' files to be deleted by the script, and to limit any local deletion to a single folder area.
I've been reading over the docs and I see --exclude and --filter, but these appear to control only which files are synced.
If anyone can point me in the right direction, or potentially explain what privileges rsync has to 'source' files, that would be great!
|
rsync privileges to delete source and destination files
|
You set connection: local for the playbook, so everything you do is executed locally (which is correct for ios_... modules, but not what you actually want for copy module).
I'd recommend to define ansible_connection variable in your inventory per group of hosts/devices, so Ansible will use local connection for your ios devices, and ssh for backup-server.
|
I'm new to all the Ansible stuff. So most of the time I'm in "Trial and Error"-Mode.
Now I'm facing a challenge with a playbook and I do not know where to look further.
The main task of this playbook should be to get a "Show run" from a Cisco Device and save this in a text file on a backup server (which is a remote server).
The only task, which is not working, is the Backup Task.
Here is my playbook:
- hosts: IOSGATEWAY
  gather_facts: no
  connection: local
  tasks:
    - name: GET CREDENTIALS
      include_vars: path/to/all/all.yml
    - name: DEFINE CONNECTION TO GW
      set_fact:
        connection:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"
    - name: GET SHOW RUN
      ios_command:
        provider: "{{ connection }}"
        commands:
          - show run
      register: show_run
    - name: SAVE TO BACKUP SERVER
      copy:
        content: "{{ show_run.stdout[0] }}"
        dest: "path/to/Directory/{{ inventory_hostname }}.txt"
      delegate_to: BACKUPSERVER
Can someone hint me in the right direction?
|
Ansible-Playbook - Save output to a remote server
|
There are a couple of components that come with Azure Backup; each of them has a specific use case, as below:
1- Azure Backup (MARS) agent
Back up files and folders on physical or virtual Windows OS (VMs can be on-premises or in Azure)
No separate backup server required.
2- System Center DPM
Application-aware snapshots (VSS)
Full flexibility for when to take backups
Recovery granularity (all)
Can use Recovery Services vault
Linux support on Hyper-V and VMware VMs
Back up and restore VMware VMs using DPM 2012 R2
3- Azure Backup Server
App aware snapshots (VSS)
Full flexibility for when to take backups
Recovery granularity (all)
Can use Recovery Services vault
Linux support on Hyper-V and VMware VMs
Back up and restore VMware VMs
Does not require a System Center license
4-Azure IaaS VM Backup
Native backups for Windows/Linux
No specific agent installation required
Fabric-level backup with no backup infrastructure needed
For full info on which Azure Backup components you should use, check out the following link: https://learn.microsoft.com/en-us/azure/backup/backup-introduction-to-azure-backup
As suggested by @Vikranth S, MARS would be your best option for the use case you've described.
-Adam
|
|
I have used the Azure Backup service to back up a single laptop/desktop over the WAN. However, what if I have 100 laptops to be backed up?
Has anyone used the Azure Backup service to protect multiple laptops and desktops?
|
How can I utilize the Azure Backup service to back up multiple laptops/desktops
|
It is not recommended to install Backup (MABS) and Site Recovery on the same machine; since both services use similar tasks, there will be a lot of conflicts.
|
We are evaluating Azure ASR for cloud Backup and Site Recovery.
Started a month ago with Backup services to backup files, folders and SQL servers and everything worked fine, MABS is installed in server A.
We have also added a physical server to Site Recovery, installing Azure Site Recovery on the same server A. Since then, every dashboard referring to backup information has not been updated, though it's clear that GRS storage is used for backups.
Tried to uninstall/re-install MABS without any success, any ideas? there are no errors in the MABS MMC console.
|
Azure Recovery Services Vault is not refreshing backup information
|
You can map your OneDrive as a shared drive on each machine, but in that case all data can be seen by the administrator, yes, like you said.
Have a look at OneDrive for Business instead; it has a per-user licensing model. From a security perspective this is much better for you.
|
|
I need files-and-folders backup for 300 Windows 8+ systems. I opted for a OneDrive solution. So do I need to buy 300 licenses of OneDrive for Business, or can I buy 1 license and create and share 300 folders, one for each person?
In either option, the user can upload data to their own space but the admin can see all the folders.
|
Microsoft One Drive solution for Files and Folders - Around 300 Systems
|
You can use split to split a big file into small pieces. For more detail, refer to the split manpage.
One example: I have one tar file, named "test.tar.gz", which is 25M; use split to split it into 3M pieces:
split -b 3M test.tar.gz pdf
(you can change 3M to 324K)
result is :
$ ls
14474.pdf pdfaa pdfab pdfac pdfad pdfae pdfaf pdfag pdfah pdfai test.tar.gz
$ du -sh *
3.0M pdfaa
3.1M pdfab
3.0M pdfac
3.0M pdfad
3.1M pdfae
3.1M pdfaf
3.1M pdfag
3.1M pdfah
908K pdfai
25M test.tar.gz
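For the 320 KiB multiples asked about in the comment below, the same idea applies; split's K suffix means 1024 bytes, so this should produce 320 KiB pieces (the last one smaller):
split -b 320K test.tar.gz part_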
Hi, thanks for your reply. I need to split it in 320KiB multiples. Do you have any idea of how to do it?
– Felipe Fuchs
Feb 26, 2017 at 2:46
Just change 3M to 324K. You can split your file into 320KB small multiples.
– 慕冬亮
Feb 27, 2017 at 15:27
|
|
Good Evening,
I have a Linux server and want to back it up to OneDrive for Business. My problem is that the file size changes each day and I need to split it in 320KiB multiples.
Any ideas?
Thanks in advance,
Felipe Liberman Fuchs
|
How to split file in 320KiB Multiples [closed]
|
The files clearly have a timestamp already:
1480431448_gitlab_backup.tar
The leading number is the Unix time of the backup.
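For example, GNU date can turn that prefix back into a readable time (output shown for a UTC+1 timezone, which matches the listing below):
$ date -d @1480431448
Tue Nov 29 15:57:28 CET 2016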
|
Doing my GitLab backup, the backed-up files have no timestamp.
It should be like this: "The filename will be [TIMESTAMP]_gitlab_backup.tar"
Here are the files:
root@gitlab:~# ll /mnt/backup-git/ -h
total 1.9G
-rw------- 1 git git 57M Nov 29 15:57 1480431448_gitlab_backup.tar
-rw------- 1 git git 57M Nov 29 15:57 1480431473_gitlab_backup.tar
-rw------- 1 git git 452M Nov 30 02:00 1480467623_gitlab_backup.tar
Here are my configuration values for the backup:
$ grep -i backup /etc/gitlab/gitlab.rb | grep -v '^#'
gitlab_rails['backup_path'] = "/mnt/backup-git/"
gitlab_rails['backup_keep_time'] = 604800
To create them, I follow the documentation here (omnibus installation):
root@gitlab:~# crontab -l | grep -v '^#'
0 2 * * * /opt/gitlab/bin/gitlab-rake gitlab:backup:create CRON=1
|
gitlab backup : no timestamp for backup filesnames
|
Yes, you can achieve that using AWS Snowball, but you will not get the data on a plain hard drive; you will get it in a Snowball appliance owned by AWS.
Check this guide https://docs.aws.amazon.com/AWSImportExport/latest/ug/getting-started.html?console_help=true
I see. So a step further, which is what I am thinking of, would be to put it on inexpensive hard drives that, once sent, the customer can keep.
– 719016
Jul 21, 2016 at 13:53
Yes you will have to put that in a hard drive
– Piyush Patil
Jul 21, 2016 at 13:55
|
|
Is there a service that will take a list of Amazon S3 urls from the customer, download and copy the files into one/many hard-drives, then ship the hard-drives to the customer for a fee?
I am thinking about something like Amazon Snowball, but that sends hard-drives to the customer, and these can simply be shelved when they arrive at the customer's destination.
EDIT: It looks like Amazon Snowball and Amazon Import/Export Disks implement some of the features, but the client (me) still has to do some of the work. So I guess I am after a company that does these middle man extra steps and just ships the final disks to the client where they are shelved.
Any ideas?
|
Amazon-S3 urls to shipped hard-drive service?
|
I followed the advice of St3ph and tried revision history. Not exactly what I meant, but an acceptable solution nonetheless.
|
|
At our company we have a Google Spreadsheets which is shared by a link with different employees. This spreadsheet is saved on a Google Drive to which only I have access. The link is configured as such that anyone with the link can edit the spreadsheet since all employees need to be able to make changes to the file.
Although this is very useful, it also presents a risk in the form of data loss. If a user were to (accidentally) delete or alter the wrong data and save the file, this data is permanently lost.
To prevent this I was wondering if it is possible to automatically have a backup created, say every day. Ideally, this backup is saved in the same Google Drive. I know I could install the desktop client and have the file backed up by our daily company backup, but it seems a bit ridiculous to install it for just one file. I'm sure there has to be another solution to this, e.g. with scripts.
|
Backup automatically
|
Sorry, I am unfamiliar with WebDAV. I can provide an answer for remote storage of the archive over ssh. Maybe you can apply this for WebDAV.
To store an archive remotely over ssh, use
tar cvf - dirToArchive | gzip -c | ssh user@remoteHost 'cat > ~/archive.tgz'
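For the WebDAV-plus-splitting scenario in the question below, the same piping idea can avoid ever storing the full archive locally; a sketch assuming GNU split's --filter option and a WebDAV endpoint reachable with curl (URL, credentials, and chunk size are placeholders):
tar czf - /path/to/bigfolder | split -b 1G - backup.tgz.part_ --filter='curl -u user:pass -T - "https://webdav.example.com/backup/$FILE"'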
|
|
Guys, I have a problem: I need to back up a big folder (35 GB), but I have just 5 GB of free disk space. I think I have the right algorithm in my head, but I have little knowledge of bash scripting:
Packing with tar
Splitting it
Get the current split filename and upload it via WebDAV
Delete it
Then split more -> upload, and so on
So I think I have to do it with piping (|), but I don't know how. Please help me if you know.
Maybe you know another method.
|
Backup huge .tar with low free disk space
|
I'm sure the problem is my ignorance of linux, python and cassandra, but I haven't found enough information to make it work or a step by step document
Being blunt here: yes. You've got the answer to your own question. It's complicated to get used to all of that at once, but a step-by-step document won't help you a bit. Really. You need to be familiar with what you're doing, or else you won't be able to do something useful.
To compare: Installing cassandra is like buying a dentist's chair. Even with a very precise step-by-step information on how to set it up and how to place a patient on it, you'll be a terrible terrible threat to your patient's teeth if you have no education as a dentist before.
Cassandra is a mighty tool for large, distributed systems. Someone who develops for it, or even just administrates it, needs a very solid understanding of how to work with their computer in the environment that Cassandra runs in. Get yourself used to Linux. Then read a lot about Cassandra. Then that project is on your level, and you will have success!
Thanks Marcus. I'll take care of what you say in your comment and your answer. I'm trying to do what you say to gain the knowledge I need, but it is a long way and I have to practise too in order to learn. Yes, it was my ignorance: the process was working right but there was no IN_MOVED_TO event. So now the question is: how could I back up a complete keyspace as it is now? I have tried with the -B option of tablesnap but nothing is uploaded to S3.
– Janbalik
Jun 11, 2015 at 8:42
|
|
I'm trying to use tablesnap to make backups, but without success. I'm using Ubuntu 12.04 and, after trying the installation of tablesnap as described on GitHub, I'm not able to do it. I guess this is due to the fact that the package is for Maverick, so I have tried to copy the code and execute it, but again without success. It always displays the message "INFO Starting up" and nothing seems to happen.
I'm sure the problem is my ignorance, but could you help me? Do you know of any document or example of installing and using it for backup and recovery?
UPDATE:
The problem was me. Tablesnap was working, but there was no IN_MOVED_TO event. So now what I'm trying to do is back up a complete keyspace. I have tried with the "-B" option of tablesnap but still nothing is uploaded to S3. Any idea?
|
Tablesnap not working
|
With some help, I finally discovered that rsync can be used as a daemon with preconfigured destinations:
On my Debian side, I created /etc/rsyncd.conf containing:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[documents]
path = /home/juan/Documents
comment = The documents folder of Juan
uid = juan
gid = juan
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/255.255.255.0
/etc/rsyncd.secrets
rsyncclient:passWord
user:password
Do not forget
chmod 600 /etc/rsyncd.secrets
And then launch
rsync --daemon
After that, I can finally see the rsync destination when configuring a backup on my NAS.
Source : http://www.jveweb.net/en/archives/2011/01/running-rsync-as-a-daemon.html
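To check the daemon from another machine, a client can list and use the module directly (the IP address is a placeholder in the allowed range; the module name, user and password come from the files above):
rsync rsyncclient@192.168.1.42::documents
rsync -av ./somedir/ rsyncclient@192.168.1.42::documents/somedir/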
|
I want to back up my LaCie OS 3.x NAS (4 TB) to a remote server using the native web interface.
The best solution for me would be to use rsync; unfortunately, I do not have ssh shell access on the disk.
I tried to back up my device with a "compatible rsync server", but without success:
Going to Backup > New Backup, Network backup, selecting all my shares, "Rsync compatible server".
I'm typing working ssh credentials for my Debian backup server (which has rsync 3.0.9) and it doesn't list any rsync destination, so I can't continue the backup schedule.
The web interface also offers a "NetBackup Server" option, but I don't know how I can install that on Debian (not sure it's the Symantec product).
Also, the NAS provides working SFTP access, but I only want to back up modified files (because backing up 4 TB each time is a bit greedy).
Any solution ?
|
Backup a Lacie 2 Big NAS on a remote linux server
|
You cannot access the contacts or SMS from the desktop. You have the ability to sync the content of the emulator with the Additional Tools that the emulator comes with. Read this post; it shows how to sync the content of the emulator to the hard disk of your desktop. Other than that, you have no access to the contacts or SMS from the desktop.
|
I need to access Windows Phone 8 contacts/pictures/sms messages from Windows desktop/store app by using c#, when phone is connected with usb cable. How can I do this ?
|
How to access Windows Phone 8 contacts from desktop app [closed]
|
You could maintain a second bare repo anywhere you want (like a USB disk), and push to both remotes at once.
See "pull/push from multiple remote locations".
That, or you can set up a script which creates a backup, which in Git is called a bundle.
I have written recently a script which does:
a full backup if the last one is over a week
an incremental backup (if new commits have been done)
You can run that script at any time: if no new commits have been created, it won't do anything.
The idea behind bundles is to save a repo as one file (much safer than saving a large number of files in a .git tree, and easier to copy around).
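As a sketch of the "push to both at once" idea, a single remote can be given two push URLs (the second path here is a hypothetical USB mount; the first matches the question's server):
git remote set-url --add --push origin user@host:A.git
git remote set-url --add --push origin /media/usb/A.git
After that, a plain git push sends the branch to both URLs. A minimal bundle-based backup and restore could look like:
git bundle create /backups/A.bundle --all
git clone /backups/A.bundle A-restored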
|
I have recently been using a remote Git server (my own computer), and the server malfunctioned and can't be remotely accessed.
I was wondering if there is a proper way to maintain TWO remote repositories, so that the second repository on my other computer can be automatically used when the first one fails?
My second question is how to automatically (e.g. via a script) synchronize the primary and backup remote repositories periodically, and how can I change which one is the default/primary repo?
My daily use case is pretty simple, as follows:
git clone user@host:A.git
cd A
(modify code ..)
git add .
git commit -m "..."
git push
git pull ....
So, there is no branching, and conflicts are not my concern here. The primary and the backup repos should just be snapshots of the same work at different points in time. Some of them may be lagging behind because of accessibility issues.
P.S. I am not asking about the basics of how to set up a (bare) remote repo and do regular push-pulls using ssh. What I am asking is whether there is a way to automatically switch between a primary and a backup repo when one of them fails.
Thanks.
--- EDIT ---
To clarify, my question boils down to this:
Is there a way to setup two remote repositories like a mirroring RAID (disk array), where
a pull request pulls from the the most up-to-date repo, and a push request pushes changes to all live mirrors.
If one of the repos goes down and comes back again, it will pick up the changes accumulated in the other repos.
Note: there is an old post here, which asks a related question. But the answers there did not address the synchronization problem, and what happens when multiple repos become out-of-sync.
|
Maintaining and synchronizing a mirror remote Git repository
|
I'll be working on one soon, but WinSCP has a CLI that you can use to accomplish this:
http://winscp.net/eng/docs/commandline
There are plenty examples using WinSCP on stackoverflow and on other sites, like:
Batch file when using WinSCP and command prompt
Problem in executing the batch file in winscp
You can use this URL to get more examples:
https://stackoverflow.com/questions/tagged/batch-file+winscp
Please keep in mind WinSCP isn't the only solution for this; it's just the one that I use at work :).
|
Can anyone direct me to a script solution that makes a backup of a directory and uploads it to an ftp server?
I was searching for a batch script initially but any solution would do, as long as it is open source.
Thanks in advance,
Jean
|
Backup and upload to FTP server [closed]
|
You can use version control software like TortoiseSVN or Git.
You need to check out the code from the server it is hosted on. Every colleague of yours will commit their code to the same server, and you can use the "SVN merge" option to merge their code.
You can read more at the following link:
http://en.wikipedia.org/wiki/TortoiseSVN
|
I am working in a company where 15 people are working, and I have to merge their work on my PC every day.
It's very time consuming. Can you provide some good resource which could work over a LAN with good performance? I am sorry if I have added this question in the wrong place, but I have no idea about it. My colleague told me that there are some free resources, but I can't find any when I search on Google. Thanks in advance.
|
How to take a backup over a LAN?
|
If you re-upload a file to Google Drive, it becomes a new file, even if its name is unchanged, so it is assigned a new id.
It's not clear where you are making the backup of the Google Drive files:
If you back them up on a local HD, you lose any link to the Google Drive object. In this case, it is possible to use the path/file name as a unique key when you want to rebuild the database.
If you make a copy in Google Drive, the copied files get a new id but retain their "Description" (an attribute that you can edit manually or using a script). In this case, you can store the original id in the description, so you will be able to retrieve it when you restore the file, and update your database with the restored ids.
|
I have a doubt about Google Drive files.
I have an account with several files that I save and access from a web app. Every time I upload a file to Google Drive, the service gives me an id which I use to download the file later on. The question is: how can I make a backup of all my files without changing their ids?
I can export my database with the ids... and I can make a copy of the files too, but if something happens and I have to upload all the files over again, the ids would not be the same, so I would not have access to them...
Does anyone know how I can do this?
Thank you in advance!
|
Can i move drive files with their ids?
|
https://bitcalm.com is what you are looking for.
Remote web-based dashboard to configure and manage your backups for
single or even multiple Linux machines
Daily incremental backups for files and/or databases
Single click restore feature
You can either use their Amazon S3 storage or provide credentials for your own cloud storage to save the backups.
|
|
Are there any simple backup tools for Linux which can be operated remotely via the web, i.e., is there an offline backup tool with a web-based GUI?
|
Web-based backup tools for Linux
|
Something like this might work:
$filename = "C:\path\to\your.bak"
$chunksize = 20480
$totalsize = (Get-Item $filename).Length
$stream = New-Object IO.FileStream($filename, [IO.FileMode]::Open)
$reader = New-Object IO.BinaryReader($stream)
$chunk = New-Object byte[] $chunksize
$size = $chunksize
$i = 0
do {
    $rest = $totalsize - ($i * $chunksize)
    $i++
    if ($chunksize -gt $rest) {
        $size = $rest
        $chunk = New-Object byte[] $size
    }
    $reader.Read($chunk, 0, $size) | Out-Null
    [IO.File]::WriteAllBytes(("{0}.{1:d3}" -f $filename, $i), $chunk)
} until ($chunksize -gt $rest)
$reader.Close()
$stream.Close()
|
I need to split a database .bak file into several files using PowerShell. How best to do it?
Any examples?
|
Splitting .mdf database into several files
|
tar cvf $(<"$1")
This will take the first argument from the command line, read its contents, and use those as the argument to tar.
Great, can I also add $2 etc. for additional args?
– user2184862
Mar 20, 2013 at 5:57
Try "$@" if you want to use all arguments. I'm not sure if it will work directly here, or if you'll need to loop over the arguments.
– John Zwinck
Mar 20, 2013 at 12:51
|
|
I have a backup script that reads from user input, compresses files, and then tar.gz's them, but does anyone know the syntax to get a bash script to accept an argument from a file?
Eg
bash script is like
tar cvf /arg1
and in another config.txt file it would look like
enter location
/home/user/
And the usage would be something like backup.sh config.txt; I would pull the location from the config file, feed it into the script, and execute it.
Any ideas ?
Cheers
|
Backup script bash that reads a external config file
|
Don't use Unicode, use Windows-1252 encoding:
chcp 1252
set destination=e:\backup\utorrent\%date%backup\
mkdir "%destination%"
copy "d:\Programok\utorrent\aktuális\*.dat" "%destination%"
|
I just want a simple script to back up some files with Task Scheduler, but copying just wouldn't work in a batch file.
I want something like this:
chcp 65001
set destination=e:\backup\utorrent\%date%backup\
mkdir "%destination%"
copy "d:\Programok\utorrent\aktuális\*.dat" "%destination%"
But even this doesn't work in a batch file, though it works when I enter the commands manually in a cmd window.
chcp 65001
set destination=e:\backup\utorrent\%date%backup
mkdir "%destination%"
copy "d:\Programok\utorrent\aktuális\settings.dat" "%destination%\settings.dat"
|
My windows batch script for copying files doesn't work, why?
|
When you perform an online restore, DB2 must lock the tablespace(s) you are trying to restore. The restore process essentially overwrites the file on disk containing the tablespaces' data. This is incompatible with applications using data in the same tablespace while the restore occurs.
If your database has all data in a single tablespace, then an online restore is not particularly useful. If you have multiple tablespaces in the database, applications may be able to continue functioning while the corrupted tablespace(s) are restored, but of course this requires some planning in your application and database design.
|
I am trying to do ONLINE RESTORE ROLLFORWARDING TO END OF LOGS AND COMPLETE. When I run this command I am getting error SQL1035N The database is currently in use. It does not allow any connections to it!
If I deactivate the database and then run the command, I can restore, but then my database is not available to users, though it should be, because it is a live production system running 24/7. How do I resolve this?
|
DB2 online restore - “SQL1035N The database is currently in use.” [closed]
|
Backing up an Access database is just making a copy of its .mdb file (and I suggest you store more than one copy, for example the last 10 copies). Usual filesystem functions should work. As to where to store the backups... it's up to you. How would we know?
|
I am making an inventory system. I need to make a backup of the database file, which is an MS Access database file. I need to make backups frequently so that the data is preserved. Please give me some information on how I can make a backup, and also guide me on where to store the backup file.
Thanks in anticipation.
|
Make Backup of Data(Ms Access database) from Java Desktop Application [closed]
|
Many text editors will create a backup copy of the prior version when you do a save.
Of course, this is pitiful compared to an actual version control system. You should know that many VCS integrate with editors so commits are very simple quick commands.
The minor time it takes to create a repository is insignificant compared to the time it will save you during the project.
Frankly, this sounds like an argument from ignorance.
|
Just something that will save changes automatically while I'm editing, say in gedit, or Notepad++, or even the Windows text editor, etc.
I can't seem to find exactly what I'm looking for and svn, bzr, and Git are too complicated. One should be able to start a new project, start writing code, and that's it!
So... I'm going to create a whole new version control system that will be more amazing and simple than all the rest! Unless something already exists? Whether it be online, or a local install, whatevs.
EDIT: Ok, the above paragraph was a bit absurd now that I read it much later. I use Git now, and Git is awesome.
|
Are there any apps that save backup versions of a file with one click (save, commit, etc) live while editing?
|
You cannot open standalone OST files - unlike PST files, they are tied to a particular mailbox and particular Outlook profile. If you mean using Outlook Object Model, there are numerous resources, including questions on Stack Overflow. You can start at https://github.com/Hridai/Automating_Outlook
You'd need to create an instance of the Outlook.Application object, recursively loop through the folders starting at Application.Session.Folders, and process each folder to save messages (MailItem.SaveAs) and/or attachments (Attachment.SaveAsFile).
|
|
I use a personal folder (.ost) file that contains specific folders, and I save relevant emails into those folders. I want to save the content of each folder recursively to my hard drive, that contain the same folder structure.
If this could have a GUI that populates based on the folder names, and I could select the ones I want from, say, a list box, that would be awesome.
No idea what library to use that is easy and free...
Any help here would be appreciate, I am newish at Python, so keen to learn.
Thank you for your help!
I have searched the net and only found Exchange-type email integration, but that is not what I need. There does not seem to be a lot about this, or else my search skills are, well, crap.
|
Python - Outlook Personal Folder (.ost) integration - Save emails from folders to HDD
|
Unlike Azure DevOps Server/TFS, there is no built-in method to back up the data of an organization on Azure DevOps Services, and there seems to be no method/tool to back up the whole organization's data at once.
You can try to search for 3rd-party tools that can back up each individual service (Azure Repos, Azure Boards, Azure Pipelines, Azure Test Plans and Azure Artifacts) from an Azure DevOps Services organization. You can search for keywords like "Backup Azure DevOps organization data" using Microsoft Bing or Google to look for possible 3rd-party backup tools.
Generally, the 3rd-party backup tools are not free, and you may need to pay for some licenses on the tools.
|
At the moment I have an organization in DevOps with several associated projects. The idea is to be able to protect my data due to internal backup policies. I understand that Microsoft has data redundancy between its datacenters, but I want to make external backups and know how they would be restored in case of a catastrophe.
Any feedback would be helpful, thank you very much. Additionally, I'm using Azure DevOps Services.
|
Can I back up my DevOps organization outside of Microsoft Azure servers? [closed]
|
Finally, I have solved this problem.
In general:
mkdir pure_source_code
cd pure_source_code
git clone ../repositories/<your_account_name>/*
As a bash script:
# uncompress_it.sh
directories=$(ls -d ../1772b8d2-550f-11ee-803e-4a843c258251/repositories/<your_account_name>/*)
for directory in $directories; do
    git clone "$directory"
done
bash uncompress_it.sh
|
|
The GitHub backup zip has the following structure:
But I don't know how to extract it to get the real source code.
When I do a normal extraction, all I get is the second screenshot, which is not the real source code.
I did a search on Google, but could not find a solution.
It seems like GitHub also does not have docs about this question.
Does anyone know how to do it?
|
How to unpack GitHub backups to get the real source code?
|
If you are logged into Chrome with a Google account, all of your Chrome data should sync automatically. If not, there are options to export bookmarks and saved passwords from Chrome settings. Hope this helps!
|
|
I have a ton of settings and accounts in Chrome on my old PC. Now I really need to move the data to a new PC.
Is there any way to do it? I'm really afraid of losing data. :((
Please give me a suggestion for transferring my Chrome data to the new PC. :D
|
Is there any way to transfer my chrome data to new pc?
|
rsync -aAXv --rsync-path="sudo rsync" -e ssh \
--exclude={"/swapfile","/var/lib/*","/var/cache/*","/usr/*","/sbin/*","/lib/*","/home/admin/go/*","/home/admin/downloads/*","/home/admin/.sdkman/*","/home/admin/.oh-my-zsh/*","home/admin/.rbenv/*","home/admin/.nvm/*","/home/admin/.npm/*","/home/admin/.m2/*","/home/admin/.gvm/*","/home/admin/.cache/*","/dev/*","/boot/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
[email protected]:/ \
~/aws_machine
-e ssh: specifies that the rsync command should use the ssh protocol for the transfer.
--rsync-path="sudo rsync" fixes the "permission denied" errors for files and folders not owned by the connecting user
|
I wanted to back up an AWS machine and exclude OS files, keeping only user files, including root-owned ones which need sudo permission to read.
|
How to rsync a linux server to a local directory excluding all os files [closed]
|
From the AWS point of view, the data transfer has to be paid for, regardless of which way the data is travelling.
For synchronizing, AWS DataSync can be used if Wasabi has a compatible interface (NFS, SMB, HDFS, S3). According to this, Wasabi has an S3-compatible interface.
There are file systems that treat S3 as file storage, but I would not advise that solution. They are complicated to set up and not meant for this use case.
|
I need to make a backup from Wasabi bucket to Amazon S3 bucket each week. Basically, transfer data between this 2 clouds.
My current solution is to download data from a source bucket to a local directory (virtual machine), and then uploading it to the destination bucket. The problem with this approach is the cost of the intermediate step, especially for large amounts of data (Transfer the data and Save it).
Is it possible to connect two buckets directly and transfer data between them without this intermediary step? Like move files from disk to another disk in Windows 11.
|
Looking for cloud-to-cloud data transfer tool [closed]
|
Crontab lets you execute one scheduled command/script at a time.
Piping the output of your script to the grep command won't work. Furthermore, crontab by default redirects output to /dev/null, so you won't see the output unless you save it to a file.
I suggest something like this:
Edit your script to redirect its output to a file with your grep command, for example by adding:
DATE=$(date +"%m_%d_%Y")
some command | grep -v Warning >> /tmp/$DATE.log # Here
Edit your Cron job to execute the script every day like you did, removing everything after the pipe:
45 5 * * * /home/username/barc/backupsql.sh
In order to monitor the output you could use tail command as follows:
tail -f /tmp/$DATE.log
|
|
I don't want to save the logs that are "warnings" in the log file that the crontab creates; I only want the "error" messages. Does anyone know how I can exclude these messages?
I have tried doing a grep -v but it doesn't work:
45 5 * * * /home/username/barc/backupsql.sh 2>&1 | grep -v 'Warning: Using a password on the command line interface can be insecure.'
Thanks in advance for anyone trying to help me.
|
How to avoid saving "warning" logs in the log file a crontab job creates
|
Use "fsutil volume diskfree c:" to get size left on drive, make it a variable using "set" and compare it to 30
EDIT:
Alternative is "dir|find "bytes free""
|
|
I have a script that I use for backing up.
Currently it does not error if there isn't enough space; it reports that everything was fine.
Is there a way I can add a storage check at the start and only allow it to run if over 30 GB is free, and error out if not?
I'd rather use cmd than PowerShell.
|
Code to make batch file check HDD space and only run if 30gb or more is available
|
@echo off
setlocal
set "Datestamp="
set /a Datestamp=%1 2>nul
if not defined Datestamp (
    for /f %%x in ('wmic path win32_localtime get /format:list ^| findstr "="') do set %%x
    set /a DateStamp=Year*10000+Month*100+Day
)
then
d:
etc., etc. from your code, but replace %1 with %Datestamp%
How it works:
The setlocal ensures that changes made to the environment are discarded when the batch ends.
The first set of datestamp "sets" the value to nothing, which causes datestamp to be undefined.
If you supply a parameter, this will be used for your datestamp. If you don't, an error message will be produced which will be suppressed by the 2>nul and datestamp will remain undefined.
If Datestamp is undefined, the for /f instruction runs wmic path win32_localtime get /format:list and picks the lines that contain = using findstr (the ^ "escapes" the pipe and tells cmd that the | is part of the parenthesised instruction, not of the for).
This will produce something like Year=2022, Month=6, Day=12, and these variables will be set by the set %%x instruction that the for executes.
Then calculate the datestamp as YYYYMMDD (20220612).
It's to your advantage that the date is expressed in this manner, as the directories then produced will be listed by dir sorted in order. Using the format you suggest will not do this and is confusing. Is 06062022 in MMDDYYYY order or DDMMYYYY order, for instance?
So - these revisions will allow the batch to be run as it always has run, but omitting the parameter will cause it to calculate the current date and use that.
Then you can run the batch with no parameter and it all proceeds automagically.
|
|
@echo off
d:
cd mljewel.12
rar a -r mlyedek.rar
cd\
f:
cd yedek
md %1
z:
cd yedek
md %1
copy D:\MLJEWEL.12\mlyedek.rar f:\yedek\%1
copy D:\MLJEWEL.12\mlyedek.rar z:\yedek\%1
del D:\MLJEWEL.12\mlyedek.rar
pause
Hello, I have a bat file saved as backup.bat and I have to run this file every day with the date tagged onto it. So what I do is press Win+R and write "backup 06062022"; the date goes to %1 in the code and a folder named 06062022 gets created.
Basically, what I need is to back up a certain folder every day on startup, with some indication that contains the date.
|
running a backup .bat file every day
|
So the issue was with the systemd unit for php-fpm 7.4, where ProtectSystem was enabled; after commenting it out, everything worked as expected.
sed -i 's:ProtectSystem=full:#ProtectSystem=full:' /usr/lib/systemd/system/php-fpm7.service
|
I am trying to implement a simple backup feature for some directories (mainly directories in /etc), handled by Laravel. Basically, I store .tar archives containing a specific directory's files.
This is a command used to create a backup archive of a single directory:
shell_exec("cd {$backupPath} && tar -cf {$dirName}.tar -P {$fullPathToDir}")
This is a command to restore directory from a backup archive:
shell_exec("cd / && sudo tar -xf {$backupPath . $dirName} --recursive-unlink --unlink-first")
For test purposes I let the http user run sudo tar; however, my initial idea was to create a bash script that handles that and add it to sudoers. Running the command or the shell script gives the same errors.
The problem is that if I run it through PHP I get errors like this:
Cannot unlink: Read-only file system
But if I run it from the command line, it works:
su http -s /bin/bash -c "cd / && sudo tar -xf {$backupPath . $dirName} --recursive-unlink --unlink-first"
Running this both on a full Arch Linux system and in an Arch Linux Docker container gives me the same results. I would appreciate any kind of help.
|
Running tar command from php causes Cannot unlink: Read-only file system
|
There are several steps to accomplish the task:
Press Windows key + R to open the Run dialog. In it, type "ms-settings:storagesense" and hit Enter to open the Storage screen.
On the right side under "Local Storage", click on the C drive (or the drive letter that holds your Windows files) concerned.
Choose "Temporary files".
Then choose to remove "Previous version of Windows" along with any other options you desire.
Click on "Remove" files.
Check that the "Windows.old" folder has indeed been removed from your C drive.
|
|
I tried using the command lines but to no avail, as I encountered the error "Permission Denied", even after taking ownership of the folder, etc.
I would like to add: if you want to keep the possibility of rolling back to a previous version of Windows, please do not attempt this folder removal, because no record of your previous Windows will remain after the removal.
This post is for an emergency fix in case the existence of your "Windows.old" folder causes problems in some scenarios.
|
How can I safely delete Windows.old folder on Windows 10?
|
Your best bet remains json.loads() to convert it to a dict.
However, the above string isn't valid JSON; the below is:
{"NetworkInterfaces": [{ "AssociatePublicIpAddress":false, "DeleteOnTermination":false, "Description":"Primary network interface", "DeviceIndex":0, "Groups":["sg-xyz"], "Ipv6AddressCount":0, "Ipv6Addresses":[], "NetworkInterfaceId":"eni-xyz", "PrivateIpAddress":"0.0.0.0", "PrivateIpAddresses":[{"Primary":false,"PrivateIpAddress":"0.0.0.0"}], "SecondaryPrivateIpAddressCount":0, "SubnetId":"subnet-xyz", "InterfaceType":"interface" }]}
I would suggest using Python's re module to convert the string into something similar, which can then be consumed by the json module.
Here is some sample code to sanitize the data:
str = """
'NetworkInterfaces':'[{"AssociatePublicIpAddress":false,"DeleteOnTermination":false,"Description":"Primary network interface","DeviceIndex":0,"Groups":["sg-xyz"],"Ipv6AddressCount":0,"Ipv6Addresses":[],"NetworkInterfaceId":"eni-xyz","PrivateIpAddress":"0.0.0.0","PrivateIpAddresses":[{"Primary":false,"PrivateIpAddress":"0.0.0.0"}],"SecondaryPrivateIpAddressCount":0,"SubnetId":"subnet-xyz","InterfaceType":"interface"}]'
"""
import re
str = re.sub('\'\[', '[', str)
str = re.sub('\]\'', ']', str)
str = re.sub('\'', '\"', str)
str = re.sub('^\n', '', str)
str = re.sub('\n$', '', str)
str = "{" + str + "}"
import json
d = json.loads(str)
print(d["NetworkInterfaces"][0]["AssociatePublicIpAddress"]) # False
Share
Improve this answer
Follow
edited Sep 1, 2021 at 7:53
answered Aug 31, 2021 at 12:39
ushangmanushangman
4644 bronze badges
1
How can we convert the string to valid json format using re module as you have mentioned?
– Rahul Chavan
Sep 1, 2021 at 1:57
Add a comment
|
|
This is the AWS Backup restore metadata.
I have the key-value pair data below, where the value is one whole string containing multiple items that need to be changed, such as false to true or true to false.
The NetworkInterfaces key has a string value containing multiple key-value pairs, which I am not able to change because it is a single string.
'NetworkInterfaces': '[{
"AssociatePublicIpAddress":false,
"DeleteOnTermination":false,
"Description":"Primary network interface",
"DeviceIndex":0,
"Groups":["sg-xyz"],
"Ipv6AddressCount":0,
"Ipv6Addresses":[],
"NetworkInterfaceId":"eni-xyz",
"PrivateIpAddress":"0.0.0.0",
"PrivateIpAddresses":[{"Primary":false,"PrivateIpAddress":"0.0.0.0"}],
"SecondaryPrivateIpAddressCount":0,
"SubnetId":"subnet-xyz",
"InterfaceType":"interface"
}]'
|
Key-Value has the data with string value however value string as multiple data which needs to be changed
|
0
Create a folder inside the XAMPP www folder and put all the WordPress files in that folder.
Browse to phpMyAdmin and create a database, then import the database dump you received via the Import tab of the database you've just created.
After you have imported the database, find the table that ends with _options and click on it; inside it you will find two records with the option_name of siteurl and home. Replace their option_value with the local URL of the WordPress site you have just set up in XAMPP (see the sketch below).
Go to your WordPress folder, find the wp-config.php file, open it and edit the database connection details.
If anything is unclear, ask in the comments.
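For reference, a minimal command-line sketch of the import and URL update steps. The database name "wp_local", the root user with an empty password (a default XAMPP setup), and the default "wp_" table prefix are assumptions; adjust them to what you actually created:
# import the dump you received
mysql -u root wp_local < backup.sql
# point the site at its new local URL
mysql -u root wp_local -e "UPDATE wp_options SET option_value='http://localhost/mysite' WHERE option_name IN ('siteurl','home');"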
Share
Improve this answer
Follow
answered Aug 24, 2021 at 9:00
arataarata
88911 gold badge88 silver badges2424 bronze badges
Add a comment
|
|
I am doing an internship in the IT department of an organization.
Right now I am working on WordPress, and I have created a few localhost sites using XAMPP.
The manager has now given me the backup files of their website and the database file on a USB drive.
Please guide me on how I can use that backup locally on my laptop; I am having trouble.
Please cover everything from copying the files from the USB drive to my laptop through to running the site locally.
Thanks.
|
How to use Backup Of Live WordPress Site Locally on another Laptop
|
AzCopy on Linux does not support Azure Table storage. For more details, please refer to here and here.
If you want to use AzCopy to export an Azure table, you need to use AzCopy v7 on Windows. For more details, please refer to here.
Regarding how to do that, please refer to here.
For example
Install Azcopy
Script
azcopy /Source:https://andyprivate.table.core.windows.net/log /Dest:https://andyprivate.blob.core.windows.net/copy/tablelog /SourceKey:<key> /DestKey:<key> /PayloadFormat:CSV
Besides, if your Azure table is very big, I suggest you use Azure Data Factory. Regarding how to do that, please refer to the official documentation.
|
I am writing a PowerShell Core task in an Azure pipeline in order to back up my Table storage using AzCopy. From what I found, only version 7 of AzCopy supports Table storage. My host is Linux and I can't find a command that works; I tried this, but it didn't work:
azcopy -source https://myaccount.table.core.windows.net/tablename --destination https://myaccount.blob.core.windows.net/containername --source-key $input1 --dest-key $input2
Any idea what the command should be? Thanks.
|
Using AzCopy for export table to blob storage in Linux
|
0
I'll finally answer my own question, in case someone has the same doubt.
There is no app that can incrementally update a bit-for-bit clone drive. Such apps make disk images, which are not immediately bootable and first need to be restored (though that is still quicker than reinstalling Windows).
The only way to update a ready-to-use backup system drive is to completely re-clone it regularly, for instance with a standalone cloning dock station.
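For completeness, a full re-clone run from a live Linux USB looks roughly like this (device names are examples only; double-check source and target before running, as dd overwrites the target without asking):
# bit-for-bit copy of the whole system disk onto the backup disk
sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=progress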
I generously give myself a "+1" rating.
Share
Improve this answer
Follow
answered Mar 12, 2021 at 21:42
user3094822user3094822
511 gold badge11 silver badge33 bronze badges
Add a comment
|
|
I would have a question that will get a "-1" rating.
Once a day I back up my data drive to a clone drive with SyncBack, which compares both disks and mirrors the original onto the copy, just updating by adding/deleting files. Easy and fast, and the backup disk is a bit-for-bit clone of the original disk.
Although it's not exactly that, one can call this an "incremental" backup.
I would like to know if it's possible to do the same with my system disk, i.e. to update a copy of the system drive once a day in order to maintain a bit-for-bit clone that would be immediately bootable: not by re-copying the whole system drive each time, but, as with my data drive, just by adding/deleting once a day the small amount of data that has changed.
Apart from building a RAID 1 including my system disk, which implies leaving the RAID running permanently, is there another way?
I didn't find any application that can clone a system disk bit-for-bit in such an "incremental" way.
|
System drive incremental clone
|
0
I tried the following code and it's working fine for Android 10 and 11.
Make sure to implement all the requirements as per this: http://android-developers.blogspot.com/2013/10/getting-your-sms-apps-ready-for-kitkat.html
Then put the following in your MainActivity class:
public void perm2(){
Context mContext3 = getApplicationContext();
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
RoleManager roleManager = null;
roleManager = mContext3.getSystemService(RoleManager.class);
Intent roleRequestIntent = roleManager.createRequestRoleIntent(
RoleManager.ROLE_SMS);
startActivityForResult(roleRequestIntent, MESSAGE_CODE);
} else {
Intent intent = new Intent(Telephony.Sms.Intents.ACTION_CHANGE_DEFAULT);
intent.putExtra(Telephony.Sms.Intents.EXTRA_PACKAGE_NAME,
mContext3.getPackageName());
startActivityForResult(intent, MESSAGE_CODE);
}
}
Share
Improve this answer
Follow
edited Apr 28, 2021 at 22:53
answered Apr 28, 2021 at 18:52
247365 nyy247365 nyy
3366 bronze badges
Add a comment
|
|
I am working on an SMS backup app which stores all SIM SMS messages in an XML file so they can be restored later. The problem is that when I want to restore, my app needs to be set as the default SMS app. I have tried all the solutions but none worked for me; for example, I used this, but all in vain:
Intent intent = new Intent(Telephony.Sms.Intents.ACTION_CHANGE_DEFAULT);
intent.putExtra(Telephony.Sms.Intents.EXTRA_PACKAGE_NAME, myPackageName);
startActivity(intent);
My app is not showing in the 'Select default' popup. Please help.
|
Android- How can Make my app as Default SMS App?
|
When you docker rm a container, you delete the container filesystem, but you don't affect any volumes that might have been attached to that container. If you docker run a new container that mounts the same volumes, it will see their content.
You'd never back up an entire container. You do need to back up the contents of volumes.
A good practice is to design your application to not store anything in local files at all: store absolutely everything in a database or other "remote" storage. The actual storage doesn't have to be in Docker. Then you can back up the database the same way you would any other database, and freely delete and create as many copies of the container as you need (possibly by adjusting replica counts in Swarm or Kubernetes).
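That said, if you do keep state in volumes, a common pattern for backing one up is to mount it read-only into a throwaway container and tar its contents (the volume and archive names here are hypothetical):
# archive the named volume "my_app_data" into the current directory
docker run --rm \
  -v my_app_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/my_app_data.tgz -C /data .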
|
I'm new to Docker and still searching for a safe way to update production code without losing any valuable data.
So far the way we update our production machine is like this:
docker build the new code
docker push the image
docker pull the image (on the preferred machine)
docker stack rm && docker stack deploy
I've read countless guides about backups, but still can't understand what, if anything, you lose if you don't back up and something goes wrong. So I have some questions:
When you docker stack rm the container, do you delete it? And if yes, do I lose something by doing that (e.g. volumes)?
Should I back up the container and its volumes (which I still don't understand how to do), or just the image? Or am I safe just creating a new tag when I docker build my new code?
Thank you
|
Docker - passing new content to production
|
0
Posts and Pages aren't stored as files in WordPress, they are stored in your database. If you open up your wp-config.php, you'll see what database tables/prefix your installation is using.
Depending on how you backed up your website, there may be a file (perhaps a .sql file, or .tar.gz file, or some proprietary file) somewhere, often in the base directory, or in the wp-content or wp-content/uploads directory. If you don't have a file like that somewhere, or just backed up the files straight from your hosting account (such as cPanel), you'll need to go back in there and download a backup of your database as well.
Also note that pages, posts, and other custom post types are stored in the wp_posts table, all together. If your site relies on custom fields, those will be stored in your wp_postmeta table.
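If you still have database access, a minimal sketch for dumping just those content tables looks like this (the credentials, database name and the default wp_ prefix are placeholders; check wp-config.php for the real values):
mysqldump -u wp_user -p wordpress_db wp_posts wp_postmeta > content-backup.sql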
Share
Improve this answer
Follow
answered Aug 1, 2020 at 21:29
XhynkXhynk
13.6k88 gold badges3333 silver badges6969 bronze badges
0
Add a comment
|
|
I have a backup of my site on my PC and I'm trying to find the posts and pages (my site content), but I can't find them. In which folder of the backup should I search, please?
|
How to find posts and pages in the backup
|
Console case
static public void Main(string[] args)
{
Console.WriteLine("Starting background process, press ESCAPE to stop.");
var timer = new System.Threading.Timer(ProcessFiles, null, 1000, 2000);
while ( Console.ReadKey().Key != ConsoleKey.Escape ) ;
}
static private void ProcessFiles(object state)
{
Console.WriteLine("Copy files... " + DateTime.Now.ToLongTimeString());
}
You can also create a service:
https://learn.microsoft.com/dotnet/framework/windows-services/walkthrough-creating-a-windows-service-application-in-the-component-designer
https://www.c-sharpcorner.com/article/create-windows-services-in-c-sharp/
Output
WinForms case
Add a Timer from the Component tab of the Visual Studio Toolbox and set the desired Interval property in milliseconds hence 3600000 for an hour.
Double click on it to create associated the event method:
private void TimerCopyFile_Tick(object sender, EventArgs e)
{
LabelInfo.Text = "Copy files... " + DateTime.Now.ToLongTimeString();
}
You need to start it somewhere or set the enabled property.
timer.Start();
Thus you can design your app as you want to offer for example settings and you can use a TrayIcon.
WPF case
In addition to the System.Threading.Timer there is:
https://learn.microsoft.com/dotnet/api/system.windows.threading.dispatchertimer
ASP.NET case
In addition to the System.Threading.Timer there is:
https://learn.microsoft.com/dotnet/api/system.web.ui.timer
|
I'd like to write a program that copies a folder and its subfolders to a specific destination every hour. I've made a simple program that can copy files when I press a button, but it's not scheduled, and it forgets the source and the destination every time I open it. Could you please help me out? Thanks in advance!
|
Creating a backup-maker program in C# windows form
|
0
This is a reinventing-the-wheel sort of thing.
There are some utilities written for this kind of purpose; to name a few:
rsync
cp (GNU cp(1) has the -u flag)
For comparing files
cmp
diff
For finding duplicates
fdupes
rmlint
Here is what I've come up with, re-inventing the wheel sort of thing (the script itself did not survive in this copy; only the notes around it remain).
It should be safe enough with files containing spaces, tabs and newlines, but since I don't have files with newlines I can't really say.
Change the action in the copy statement depending on what you want to do; if the action used is overkill, swap in something simpler.
Add a shell trace option after the shebang to see what's actually being executed. Good luck.
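Since the script itself was lost above, here is a minimal sketch of the checksum-compare idea, reusing the source and destination paths from the question (this is not the original poster's code):
#!/bin/bash
SOURCE="/home/pallavi/backup1"
DEST="/home/pallavi/BK"
# copy a file only when it is missing in DEST or its md5sum differs
find "$SOURCE" -type f -print0 | while IFS= read -r -d '' src; do
    rel="${src#$SOURCE/}"
    dst="$DEST/$rel"
    if [ ! -f "$dst" ] || [ "$(md5sum < "$src")" != "$(md5sum < "$dst")" ]; then
        mkdir -p "$(dirname "$dst")"
        cp -p "$src" "$dst"
    fi
done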
Share
Improve this answer
Follow
edited Apr 18, 2020 at 12:13
answered Apr 18, 2020 at 11:14
JetchiselJetchisel
7,57322 gold badges2020 silver badges1818 bronze badges
Add a comment
|
|
I want to implement an incremental backup in Ubuntu, so I am thinking of computing the md5sum of all files in the source and the target: if two corresponding files have the same md5sum, keep the file already in the destination; if they differ, copy the file from the source into the destination directory.
I am thinking of doing this in bash.
Can anyone help me with the commands to compare the md5sums of two files in different directories?
Thanks in advance!!
#!/bin/bash
#
SOURCE="/home/pallavi/backup1"
DEST="/home/pallavi/BK"
count=1
TODAY=$(date +%F_%H%M%S)
cd "${DEST}" || exit 1
mkdir "${TODAY}"
while [ $count -le 1 ]; do
count=$(( $count + 1 ))
cp -R $SOURCE/* $DEST/$TODAY
mkdir "MD5"
cd ${DEST}/${TODAY}
for f in *;do
md5sum "${f}" >"${TODAY}${f}.md5"
echo ${f}
done
if [ $? -ne 0 ] && [[ $IGNORE_ERR -eq 0 ]]; then
#error or eof
echo "end of source or error"
break
fi
done
|
How to write script for incremental backup in Ubuntu?
|
I did the following:
Step 1: I set up the bash script below to run automatically on Ubuntu (as the "openproject" user):
#!/bin/bash
Date=`date +"%Y_%m_%d_%H%M"`
openproject run backup
mkdir /var/db/openproject/backup/op_$Date
mv /var/db/openproject/backup/*.gz /var/db/openproject/backup/*.pgdump /var/db/openproject/backup/op_$Date
Step 2: Use "FreeFileSync" to copy the backups from the Ubuntu server to Dropbox, a local host, etc.
Hope this is useful for you!
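If you want a plain PostgreSQL dump instead of the OpenProject wrapper, a sketch would be (the database name "openproject" and the postgres superuser are assumptions; adjust to your install):
# custom-format dump contains schema and data and is restorable with pg_restore
pg_dump -U postgres -Fc openproject > /var/db/openproject/backup/openproject_$(date +%F).pgdump
# restore later with:
# pg_restore -U postgres --clean -d openproject openproject_<date>.pgdump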
|
I have installed PostgreSQL on an Ubuntu Linux system for the OpenProject project management tool.
After searching the internet I installed pgAdmin and tried to back up.
I could make a backup, but it only saves SQL statements to a file.
I want to back up the whole database, for example myproject, with all the data in it.
|
How to backup postgresql database with data from windows or linux system?
|
0
If you replicate your on-premises VMs to Azure, in the event of disaster you can trigger failover with single click in the Azure portal, or you can use Site Recovery PowerShell to trigger a failover. Reference URL - https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-faq#is-failover-automatic
For your information, see this link - https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-backup-interoperability AFAIK, this is supported only in Azure to Azure scenario.
Share
Improve this answer
Follow
edited Apr 26, 2020 at 3:49
David Makogon
70.1k2121 gold badges144144 silver badges191191 bronze badges
answered Apr 23, 2020 at 12:52
SadiqhAhmed-MSFTSadiqhAhmed-MSFT
17144 bronze badges
Add a comment
|
|
We have on-premises VMware servers and have configured scheduled VM backups to Azure. Could someone please explain whether, if our on-premises hardware crashed, it is possible to run those backups in the Azure cloud as a secondary server to avoid downtime?
|
Azure Disaster Recovery
|
0
try duplicity 0.8. version 0.7 outdated ..ede/duoly.net
Share
Improve this answer
Follow
answered Mar 24, 2020 at 8:55
edeede
1
1
(From the review queue, don't @ me!) Hello and welcome to StackOverflow! Unless you know for a fact that upgrading to version 0.8 will solve the problem, this sort of answer should be left as a comment. I know new users aren't allowed to leave comments, this is just advice for the future.
– GKFX
Mar 24, 2020 at 12:28
Add a comment
|
|
I have an issue with duplicity 0.7.06 and 0.7.19 backup tool when trying to auth using keystone V3 with OVH.
Duplicity creates only local backups in .cache/duplicity but does not push them to OVH swift object storage. There are no errors it just keeps storing them on local disk.
It was working before with keystone auth V2, credentials are the same. I was able to login to swift using those credentials but duplicity does not work...
Here's the command I am using with environmental settings:
export SWIFT_AUTHURL="https://auth.cloud.ovh.net/v3/"
export SWIFT_USERNAME="xxxx"
export SWIFT_PASSWORD="xxxxx"
export SWIFT_REGION_NAME="SBG"
export SWIFT_USER_DOMAIN_NAME="Default"
export SWIFT_PROJECT_DOMAIN_NAME="default"
export SWIFT_TENANTNAME="xxxx"
export SWIFT_AUTHVERSION="3"
export PASSPHRASE="xxxx"
HOSTNAME=$(hostname)
duplicity /home swift://${HOSTNAME}
Anyone had similar issue?
|
duplicity not pushing backups to OVH swift container
|
0
When you back up a database there are a few options. I think you are asking how to do a differential backup?
The information relating to differential backups can be found here:
https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/differential-backups-sql-server?view=sql-server-ver15
The first thing to note is that you need a full backup as the 'base' upon which you are to do a differential backup.
The code used to do a differential backup of the AdventureWorks Database to the standard SQL Server Backup location is the following:
Use Master;
Go
Backup Database AdventureWorks2017 to DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\AdventureWorks2017_New.Bak'
With Differential
This will take a short amount of time as it will only back up the difference in the database since the last full back up.
Hope this helps.
Share
Improve this answer
Follow
answered Mar 2, 2020 at 9:23
Alchemical29Alchemical29
9433 bronze badges
2
thanks for your reply , i think i don't clarify the question.
– Naturehigh
Mar 3, 2020 at 1:32
Monthly , i will get a .bak file include about 100 tables and it is extracted (full backup) from another big database (more than 100 tables), the .bak file doesn't have any history like insert、delete etc .and now , a mission is making a difference backup between monthly .bak file and deliver to another user, but i dont have any idea for this now , these is my totally question , and please kindly help me , thanks.
– Naturehigh
Mar 3, 2020 at 1:49
Add a comment
|
|
I am working on SQL Server 2017. I have taken a backup and restored the database.
Is there any way I can make a differential backup of the data between two consecutive .bak files (for example, one made on June 1 and another made on July 1)?
Or is there any script (or tool) that I can run to check, possibly table by table, and export the difference as a backup?
|
Make a difference backup between two .bak or database and .bak
|
0
Wow, that's quite a collection! Provided there's no way of using IMAP: although I haven't tried this myself, Thunderbird could quite possibly do it, as I don't believe there is a limit as long as you don't run out of disk space or RAM, and attachments will be compressed.
Share
Improve this answer
Follow
answered Feb 19, 2020 at 23:50
LogicalDevLogicalDev
1133 bronze badges
Add a comment
|
|
How to download all emails (632,000 emails) from a POP server? Currently MacOS Mail limits me to 200,000 emails. Is there a client capable of doing the job without limitation? I do not have access to the server configuration, I am a user.
|
How download 600k emails from a POP3 server
|
First made an Xtrabackup using this command:
xtrabackup -u root -H 127.0.0.1 -p 'supersecretpassword' --backup --datadir=/data/mysql/ --target-dir=/xtrabackup/
xtrabackup -u root -H 127.0.0.1 -p 'supersecretpassword' --prepare --datadir=/data/mysql/ --target-dir=/xtrabackup/
Then uploaded to S3 bucket using this command:
aws s3 sync /dbbackup s3://tmp-restore-bucket/
From the DR server in the other region, ran this command to download the xtrabackup straight to the db data folder after removing the existing db data files. This is the fastest way.
aws s3 sync s3://tmp-restore-bucket /data/mysql/
Finally start mysql on the DR server, and start your slave sync again using the command given in one of the xtrabackup files you created.
Super easy and the best and fastest way I've found.
|
Just to give you an idea, we have a DR db server in another region of AWS (Oregon), from the master (Virginia). We had an issue where replication broke, and we have to do a dump and restore.. we are talking about 3 tb of data.. so making a backup, creating an AMI, moving it across, dumping it back to a volume and then restoring is a lot of work. I am doing an rsync across ssh, and it is taking forever.. I estimate 2 days for the task to complete.. The data is an xtrabackup - so all db tables, and files basically..
Has anyone come across this issue, and what is the best way to transfer such massive amounts of data in the shortest amount of time? Believe me, I have thought of S3 etc.. but don't have the experience in transfer speeds to/from buckets across regions etc. Any ideas?
|
AWS EC2 rsync between regions xtrabackup folder
|
Try this:
function makeCopy() {
var folder=DriveApp.getFolderById("1y66aE2WuaRQQM5fyevXEl5uhJamk9VF7");
var ss=SpreadsheetApp.getActive();
var file=DriveApp.getFileById(ss.getId());
var f=file.makeCopy(folder);
var copy=SpreadsheetApp.openById(f.getId());
var shts=copy.getSheets();
SpreadsheetApp.getUi().alert('Go to other sheet to authorize Import. Hit Okay after authorizing the Import');
shts.forEach(function(sh){
var rg=sh.getDataRange();
var dvA=rg.getValues();
sh.clearContents();
rg.setValues(dvA);
});
}
I think the problem was that we needed to authorize the import on the other sheet. Try it. See if it works on your spreadsheet.
|
I'm looking for a way to automatically backup a google sheets file without the functions, only the numbers.
I have scavenged 2 scripts from various sources:
// Abhijeet Chopra
// 26 February 2016
// Google Apps Script to make copies of Google Sheet in specified destination folder
function makeCopy() {
// generates the timestamp and stores in variable formattedDate as year-month-date hour-minute-second
var formattedDate = Utilities.formatDate(new Date(), "GMT", "yyyy-MM-dd' 'HH:mm:ss");
// gets the name of the original file and appends the word "copy" followed by the timestamp stored in formattedDate
var name = SpreadsheetApp.getActiveSpreadsheet().getName() + " Copy " + formattedDate;
// gets the destination folder by their ID. REPLACE xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx with your folder's ID that you can get by opening the folder in Google Drive and checking the URL in the browser's address bar
var destination = DriveApp.getFolderById("1ysgERhjmrnq5Uzb9Lu7CtqOWccVTHyVj");
// gets the current Google Sheet file
var file = DriveApp.getFileById(SpreadsheetApp.getActiveSpreadsheet().getId())
// makes copy of "file" with "name" at the "destination"
file.makeCopy(name, destination);
}
The other is:
function getRangeValues() {
var sheet = SpreadsheetApp.getActiveSheet();
var range = sheet.getRange("A2:B4");
var values = range.getValues();
return values;
};
Data from multiple sheets files are pulled into 1 master file, and I wanna have backups every week of every single one of them, but problem is with the 'makeCopy' function the data pulled into the copy of the master file will be coming from the original sheets, because i'm using the importrange function which requires unique sheets ID, and the copy has another ID. How can I combine these 2 together?
|
Numbers only Google Sheets automated backup
|
Got it.
I needed to switch around the ordering of the includes and excludes
duplicity \
--include='/home/MINE/Shareable' \
--include='/home/MINE/Pictures' \
--exclude='**' \
--volsize 10 \
--s3-multipart-chunk-size 5 \
--s3-use-new-style \
--asynchronous-upload \
/home/MINE/webapps/webapp2 \
s3://s3.amazonaws.com//MYBACKUP/MYMACHINE/apps/MINE/webapp2 \
--full-if-older-than 30D
|
I am running across an error happening when I decide to include other folders in my backup, and I am wondering how I can correct it.
My full duplicity command is
duplicity \
--include='/home/MINE/Shareable**' \
--include='/home/MINE/Pictures**' \
--volsize 10 \
--s3-multipart-chunk-size 5 \
--s3-use-new-style \
--asynchronous-upload \
/home/MINE/webapps/webapp2 \
s3://s3.amazonaws.com//MYBACKUP/MYMACHINE/apps/MINE/webapp2 \
--full-if-older-than 30D
If I do not include the --include arguments, it runs fine, once they are included I receive a FilePrefixError: /home/MINE/Shareable**.
Yes, all these paths do exist. To me, it seems as if duplicity is expecting the included paths to be relative to /home/MINE/webapps/webapp2 rather than absolute paths.
How can I correct this?
NOTE
duplicity \
--exclude='*' \
--include='/home/MINE/Shareable**' \
--include='/home/MINE/Pictures**' \
--volsize 10 \
--s3-multipart-chunk-size 5 \
--s3-use-new-style \
--asynchronous-upload \
/home/MINE/webapps/webapp2 \
s3://s3.amazonaws.com//MYBACKUP/MYMACHINE/apps/MINE/webapp2 \
--full-if-older-than 30D
ends with a new message:
Last selection expression:
Command-line include glob: /home/MINE/Pictures**
only specifies that files be included. Because the default is to
include all files, the expression is redundant. Exiting because this
probably isn't what you meant.
|
Duplicity FilePrefixError When Including Absolute Path
|
As we discussed in the comments on the original question, the /dev/sdX entries are devices presented over the iSCSI protocol. To manage those you would normally use the iscsiadm command.
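For example, the detailed session listing maps each iSCSI target to the local block device it backs, which should tell you what /dev/sda, /dev/sde and /dev/sdf actually point at:
# print detailed session info, including "Attached scsi disk sdX" per target
iscsiadm -m session -P 3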
|
I have a folder on my server that mounts volumes of a FreeNAS via the iSCSI protocol. I need to mount these same folders on another server but I can't figure out how they were mounted because the naming in FreeNAS and the folders are different.
Are there any commands I can use to see how they were assembled? Using the df command I have the following return:
/dev/sde 1008G 605G 352G 64% /mnt/folder1
/dev/sda 1008G 150G 808G 16% /mnt/folder2
/dev/sdf 4,0T 4,0T 0 100% /mnt/folder3
But this is not useful since I can't figure out which volumes these mounts are referencing.
I'm Using Debian GNU/Linux 8.9 (jessie) and FreeNAS 9.10.2.
|
Command to know what was mounted in a folder
|
0
Please try following steps.
Install WP-CLI
Follow: https://wp-cli.org/#installing
Go to the Wordpress site folder in terminal
Using: cd /path-to-wordpress/
Configure wp-config.php with new database password
Use below command to replace your existing siteurl to new one.
wp search-replace "existing-url" "new-url"
If you are working on localhost new-url will be localhost/relative-path-to-wordpress.
Check site-url value in wp_options table for existing-url
Share
Improve this answer
Follow
answered Oct 24, 2019 at 17:26
Ayus MohantyAyus Mohanty
39011 silver badge88 bronze badges
Add a comment
|
|
I recently had a problem with my website, which was built on WordPress, so I decided to install XAMPP on my Ubuntu 18.04 machine and try to tackle the issue on localhost. I made a new user using the "Privileges" tab in localhost/phpMyAdmin and ran "sudo mv wordpress /opt/lampp/htdocs" so my old website could be linked to the new database. My old website is now accessible through http://localhost/wordpress/, but it does not show all the contents and lacks many things. It also does not let me enter the admin area through the login address. I should mention that while my webpage was working properly I changed the login address from .../wordpress/wp-admin/ to .../wordpress/loginWebName.
|
Unabel to login to wordpress's admin area when tried to install the backup on Xampp (ubuntu 18.04)
|
It depends on how you want to recover. If you want to restore a specific node, you need a backup from that node.
If you are rebuilding your swarm cluster from an old backup, then you only need one healthy node's backup. See the following guide for performing a backup and restore:
https://docs.docker.com/engine/swarm/admin_guide/#back-up-the-swarm
If you restore the cluster from a single node, you will need to reset and join the swarm again on the other managers since you are running a single node cluster. What is restored in that scenario are the services, stacks, and other definitions, but not the nodes.
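For reference, a sketch of the documented backup step on one manager (default installation paths are assumed; Docker is stopped briefly, so do this on one manager at a time):
# stop the engine so the raft state is quiescent, archive it, then restart
systemctl stop docker
tar czf /root/swarm-backup-$(date +%F).tgz -C /var/lib/docker swarm
systemctl start docker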
|
From the official Docker documentation, there is a statement (quoted below) that looks confusing to me. From my understanding, don't we only need to pick any one of the healthy manager nodes to back up for future restoration purposes?
"You must perform a manual backup on each manager node, because logs contain node IP address information and are not transferable to other nodes. If you do not backup the raft logs, you cannot verify workloads or Swarm resource provisioning after restoring the cluster."
Link: https://docs.docker.com/ee/admin/backup/back-up-swarm/
|
Backup Docker Swarm - How many Manager Nodes Required
|
0
There are multiple ways to do this.
You can write a batch script which uses the AWS CLI to push your logs to S3 and schedule a task that executes this script. You also have to attach an IAM role with S3 permissions to your Windows EC2 instance.
You can also stream these logs to CloudWatch using the CloudWatch agent, and from there stream them on to S3, e.g. using a CloudWatch Logs subscription filter.
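For the first option, the core of such a script is a single AWS CLI call that can be run from Task Scheduler (the bucket name and log path below are placeholders; the instance role needs s3:PutObject on the bucket):
# sync the local log directory to S3; already-uploaded files are skipped
aws s3 sync "C:/inetpub/logs/LogFiles" "s3://my-log-backup-bucket/server01/"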
Share
Improve this answer
Follow
answered Feb 22, 2019 at 19:06
Shreyash SolankeShreyash Solanke
35111 silver badge99 bronze badges
1
Thanks Shreyash. The first option that you mentioned seems to be interesting. Do you also use such a script for pushing logs to S3? If yes, can you please share that with me?
– Sumit S
Feb 25, 2019 at 6:43
Add a comment
|
|
Like I already mentioned above, I am looking for a way to automate log backup for Windows Server to AWS S3.
|
Is there any way to automate Window Server 2012/16 EC2 instance log backup directly to AWS S3
|
0
You read the file, line by line and use something like shutil.
A quick example:
from shutil import copyfile
import os

# each line of files.txt holds one full path; copy every file into "folder/"
for line in open('files.txt', 'r'):
    filename = line.strip()          # drop the trailing newline
    if not filename:
        continue
    dest = os.path.join("folder", os.path.basename(filename))
    copyfile(filename, dest)
I am sure there is a more pythonic way to do this.
dest is where the file will end up
Edit:
Maybe you want to use move from shutil instead.
Share
Improve this answer
Follow
answered Dec 3, 2018 at 14:48
user10639761user10639761
1
@seánReidy. Post what you have tried in your initial question and maybe include example of your file. My code successfully moves a file to a new destination on my system.
– user10639761
Dec 3, 2018 at 15:02
Add a comment
|
|
I have a .txt file with a number of files.
On each line of the .txt file there is a full path to a file.
How can I copy the files into a folder, e.g. /home/admin?
|
Copy a list of file in a text file to a folder, using python
|
0
To save time and space, Apps aren't actually backed up, only their data is. When you restore from back up, apps are simply re-installed from the App Store. As a result, apps that aren't in the App Store (such as apps you are developing yourself) aren't restored. You will need to re-install them using Xcode.
Share
Improve this answer
Follow
answered Nov 16, 2018 at 0:33
Paulw11Paulw11
111k1414 gold badges165165 silver badges195195 bronze badges
Add a comment
|
|
I have installed a debug app on my iPhone and made a backup.
However, the app does not show up on the iPhone once I wipe the device and restore from iTunes.
Do backup/restore only support apps installed from the App Store?
Thanks,
Jef
|
App is not able to restore from backup on iPhone
|
0
Just copying the file?
import os
os.popen('copy .\\file1.txt .\\test\\file1.txt')
If you're trying to move ALL .pptx files in a directory to a backup, something like this might work for a more, hassle-free method:
import os
from os import listdir
from os.path import isfile, join
path = "./"
included_extensions = ['pptx','PPTX']
allFiles = [f for f in listdir(path) if any(f.endswith(ext) for ext in included_extensions)]
length = len(allFiles)
for i in range(length):
os.popen('copy .\\'+allFiles[i]+' .\\backup\\'+allFiles[i])
Share
Improve this answer
Follow
edited Oct 18, 2018 at 20:53
answered Oct 18, 2018 at 20:43
Mark CookMark Cook
17911 silver badge1111 bronze badges
Add a comment
|
|
I am currently working on a program to automatically copy a PowerPoint file to another location when run; however, I cannot use shutil, as it cannot handle .pptx files. Any suggestions would be appreciated.
|
Copying a powerpoint file
|
0
Thank you guys for helping me out.
I found the solution to my question and I thought the knowledge is worth sharing.
Here is my complete code, which I've found perfect for the job and convenient:
@echo on
ROBOCOPY "I:\MJDrive" "d:\DriveBackup" /e /z /xo /tee /mt:4 /R:5 /W:5 /it /LOG:d:\Backup.log
start "" "D:\Backup.log"
1. I like to keep echo on so I can see at any time what's going on.
2. The /it option is the real charm for keeping database-type files updated.
3. Finally, it opens the log file to show the details of the operation just done.
Hope it helps someone else.
Share
Improve this answer
Follow
answered Oct 16, 2018 at 5:37
junaid bashirjunaid bashir
1733 bronze badges
Add a comment
|
|
I'm looking for an incremental backup of one of the folders on my USB flash drive to my PC.
I'm using a ROBOCOPY command and have scheduled it on the logon event through Task Scheduler.
Here's the code I'm using in my bat file
ROBOCOPY "I:\MJDrive" "d:\DriveBackup" /e /z /xo /tee /mt:4 /R:10 /W:10 /xf /LOG:d:\Backup.log
Everything is working fine with the other files except the MS Access files, which are exactly the files I'm working on and the reason I'm doing all this.
Why does it not work with MS Access files?
Please help.
|
robocopy command not working with ms access files
|
0
No, It is not possible to run WordPress without the database.
What's more:
The only database supported by WordPress is MySQL version 5.0.15 or greater, or any version of MariaDB.
Why it is not possible?
Because as You can read in WordPress Documentation, It is using the database
for storing and retrieving the content of your blog, such as posts, comments, and so on.
https://codex.wordpress.org/Database_Description
Based on the fact that you downloaded the files, I think you will only be able to restore parts of your website on a new installation.
Share
Improve this answer
Follow
answered Sep 19, 2018 at 8:10
AlexAlex
41977 silver badges1616 bronze badges
Add a comment
|
|
I have downloaded all files and folders from the cPanel root directory, but somehow I have deleted/lost the database on my end. Is there any way to get all the content back from my backup, or can I bring my website back online without the database?
|
Need help to live WordPress website without database
|
Check the backup LSN data from your msdb.
select s.backup_set_id,
s.first_lsn,
s.last_lsn,
s.database_name,
s.backup_start_date,
s.backup_finish_date,
s.type,
f.physical_device_name
from msdb..backupset s join msdb..backupmediafamily f
on s.media_set_id = f.media_set_id
Most likely you are restoring the wrong file or restoring in the wrong order.
|
I have a maintenance plan for full and transaction log backup. It takes a full backup every day at 9 PM evening, and takes a log backup the next morning at 8:30 AM.
When I try to restore the morning log backup after restoring the previous evening's full backup (9 pm), it throws an error
The log in this backup set is too recent
but there is no other backup taken in between those two.
Is there anything I can do?
SSMS v17.18.1
SQL Server 2016 (v13.0.1601.5)
|
SQL Server : restoring a backup results in "The log in this backup set is too recent"
|
0
If it has a '.bak' extension then it is already a 'bak' file, and SQL Server will be able to open it. What you are referring to is your default option for opening files of the '.bak' type, which you can change by right-clicking the file, going to 'Open with' and then choosing whatever program you would like.
However, there is likely no need to change this, as only SSMS can open and use the file anyway.
Share
Improve this answer
Follow
edited Jul 9, 2018 at 10:21
answered Jul 6, 2018 at 13:07
Rosie ThomasRosie Thomas
6355 bronze badges
3
1
You need sql server, you don't need SSMS as it's only a UI. It seems nitpicking, but there's a clear distinction.
– HoneyBadger
Jul 6, 2018 at 16:19
When I try to restore from the .BAK file where it says "opens with Notepad", I get the following error "Cannot open backup device. 'L:\...' Operating System error 2 (the system cannot find the file specified)". But when I restore from a clear .BAK file saying noting about opens with notepad, the restore process works fine.
– AlexDesta
Jul 9, 2018 at 12:09
I don't understand the difference between these two files. Do they or do they not both have the file extension '.bat'? If you are not clicking 'save' after you open in notepad, or altering the file extension, I wouldn't expect them to behave any differently (in fact, they shouldn't both be able to exist with the same name in the same place, as they should be identical).
– Rosie Thomas
Jul 10, 2018 at 13:08
Add a comment
|
|
I have a SQL Server .BAK file but it opens in Notepad. How can I change it into a .bak format?
|
SQL Server BAK file opens in Notepad
|
The problem was solved by updating the firewall and adding iptables rules for FTP passive mode.
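For reference, a sketch of the kind of client-side rules this usually involves (your exact firewall layout may differ, so treat these as an illustration only):
# load the FTP connection-tracking helper so passive data connections are matched
modprobe nf_conntrack_ftp
# allow the FTP control channel out and related/established traffic back in
iptables -A OUTPUT -p tcp --dport 21 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT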
|
I installed backup-manager and, after configuring it, I tried to execute it with sudo backup-manager, but there is an error:
Net::FTP>>> Net::FTP(3.08_01)
Net::FTP>>> Exporter(5.72)
Net::FTP>>> Net::Cmd(3.08_01)
Net::FTP>>> IO::Socket::IP(0.37)
Net::FTP>>> IO::Socket(1.38)
Net::FTP>>> IO::Handle(1.36)
Net::FTP=GLOB(0x558b37af3978)<<< 220 server ready - login please
Net::FTP=GLOB(0x558b37af3978)>>> USER XXXXXX
Net::FTP=GLOB(0x558b37af3978)<<< 331 password required
Net::FTP=GLOB(0x558b37af3978)>>> PASS ....
Net::FTP=GLOB(0x558b37af3978)<<< 230 login accepted
Net::FTP=GLOB(0x558b37af3978)>>> TYPE I
Net::FTP=GLOB(0x558b37af3978)<<< 200 TYPE is now 8-bit binary
Net::FTP=GLOB(0x558b37af3978)>>> CWD /
Net::FTP=GLOB(0x558b37af3978)<<< 250 OK. Current directory is /
Net::FTP=GLOB(0x558b37af3978)>>> PASV
Net::FTP=GLOB(0x558b37af3978)<<< 227 Entering Passive Mode (XX,XX,XX,XX,XX,8)
Net::FTP=GLOB(0x558b37af3978)>>> STOR 2mb_file.dat
Net::FTP=GLOB(0x558b37af3978)<<< 421 Timeout
Unable to transfer /var/archives/2mb_file.dat: Timeout
Unable to transfer test file
The upload transfer "ftp" failed.
I tried increasing export BM_UPLOAD_FTP_TIMEOUT="30" to export BM_UPLOAD_FTP_TIMEOUT="3000", but I get the same error after a longer wait...
What's wrong?
|
backup-manager [Net::FTP] Timeout error when execute it
|
Let me explain a few things first.
Backing up the previous version
Firstly, you need to identify your current application's installation folder. You can create a registry key to store where the application is installed (you need to do this in the first-time installer of your application). For that you can use Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ProductName\appPathHere"). Then, in your new installer, you can read the registry key to get the path of the application. Then, what you can do is create a ZIP of that path/folder. For that you can use:
System.IO.Compression.ZipFile.CreateFromDirectory(pathofApp, zipFilePath);
This will back up the current application. You can even modify the file type/extension to give it your own custom type/extension.
Installing the application
Read the registry key to get the path of the installed files. Delete them using System.IO.Directory.Delete(path, true). You can ZIP all your files and then make your installer extract the archive to the specific location. You can simply use:
System.IO.Compression.ZipFile.ExtractToDirectory(zipPath, extractPath);
Creating the installer
I suggest you create a WinForms or WPF application, design the UI and implement the above methods.
This is not the ideal way, but it will give you an idea of how to get it done with basic knowledge. Hope it helps you.
|
I am trying to create an installation program that will backup the previous version of a C# program before updating it. I'm using VS 2015, and have looked at the installer, advanced installer and InstallShield LE. I don't really know what I'm looking at, how to use custom actions, pretty much anything. Any advice or help would be appreciated.
|
Creating a C# installation package that backs up the previous version before updating.
|
0
From your description I guess that you have a physical backup, a copy of the data files like pg_basebackup creates.
If there is a backup_label file in the backup, and all the required WAL files are in the pg_xlog (or pg_wal) directory, then all you have to do is start the server on the data directory (pg_ctl start -D <directory here>) and wait until recovery has completed.
Then you can use pg_dump and pg_restore to extract the data from this new PostgreSQL cluster and import it into the destination.
Share
Improve this answer
Follow
answered Jan 30, 2018 at 9:02
Laurenz AlbeLaurenz Albe
225k1818 gold badges234234 silver badges303303 bronze badges
Add a comment
|
|
I am new to PostgreSQL, and I have a big dataset which is a PostgreSQL backup. I am having problems importing this dataset into my PostgreSQL installation.
Actually, this is a "pgdata" format backup consisting of some files and folders. One of these folders (the base folder) has all the main files (2000 files, each of which is 1 GB). But all of these files have no extension!
I would be so grateful if you could give me some advice on this issue and help me to restore this backup.
Best,
|
restore a PostgreSQL backup
|
I suggest you back up your local files to Azure Blob storage. Compared to an Azure VM it is much cheaper; you only pay for the storage account.
You could check this blog: backup data from Linux servers to Azure Blob storage.
You can also set up a task to upload your files to Azure automatically.
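A minimal sketch with azcopy v10 and a cron job (the storage account, container, SAS token and share path are placeholders):
# upload the Samba share to a blob container; schedule this line in cron
azcopy copy "/srv/samba/share" "https://mystorageacct.blob.core.windows.net/backups?<SAS-token>" --recursive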
|
I want to make an automatic backup of our Samba (Ubuntu) server.
This automatic backup should go to an Azure (Ubuntu) server.
How must I configure our local server so that it backs up the data and sends it to the server in Azure?
Image
Edit:
The working Solution for me was with Azure Blob Storage and a cron job.
|
Backup Ubuntu Server with Azure
|
0
You can use a number of tools to set up an automated sync job. A cron job (Linux), a task in Windows Task Scheduler (Windows)
Following approaches rely on commands that pull the latest changes from the source repository to your local clone repo, and then mirror those changes to your AWS CodeCommit repository. They can be summed up as follows:
cd /path/to/your/local/repo
git pull
git push sync --mirror
Please refer to this AWS Blog Post.
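A possible crontab entry tying those commands together (the repo path and the "sync" remote name pointing at the CodeCommit repository are assumptions based on that post's setup):
# mirror the source repo into CodeCommit every 15 minutes
*/15 * * * * cd /path/to/your/local/repo && git pull --quiet && git push --quiet sync --mirror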
Share
Improve this answer
Follow
answered Jan 11, 2018 at 6:55
Kush VyasKush Vyas
5,89922 gold badges2626 silver badges3636 bronze badges
Add a comment
|
|
We recently started AWS and our local git is now hosted in AWS. I want to know how i can sync the code commits to a local gitlab where all the commits made will automatically come and sit in both code commits and my local repo.
Hoping for a good solution.
Thank you all.
|
How to sync aws code commits to local repository
|
0
Most likely the server had a corrupt file system and storage, that explains the failed update and backup.
Regarding changing the IP, the host had to restore to a different server rack or box. Each box has a unique IP address.
Share
Improve this answer
Follow
answered Nov 19, 2017 at 7:37
ScriptonomyScriptonomy
4,02511 gold badge1515 silver badges2323 bronze badges
1
Why would they have to restore to a different box instead of the same box? They told me they could find no problems or reason for the deletion. Wouldn't they know if the file system was corrupt? My site was a subdomain, one of many. Why would none of the other several sites (subdomains) have any problems?
– fullerm
Nov 19, 2017 at 7:44
Add a comment
|
|
I received an email stating that the Wordpress auto update for my site was aborted because the backup failed. When I go to check the site all files were deleted, everything. I ask the site host if they could restore from a backup. They said sure then changed the IP so that my site is inaccessible until the DNS propagates. Something seems very fishy. How could a Wordpress auto update remove all the files and why would a website restore require a new IP address?
I've had website restores before without requiring a new IP and I've never heard of Wordpress deleting itself.
The problem site was a subdomain, one of many, all on the same server. All other sites (subdomains and the main site) were working fine. The admins claimed that they could find no reason as to why the files were all deleted and despite repeated requests, gave no technical explanation for why they had to move the site to a new server.
|
Wordpress site deleted after failed auto update backup
|
0
The backups will fail and you will not be able to restore your TFS server in case of a corruption.
Share
Improve this answer
Follow
answered Oct 17, 2017 at 8:44
jessehouwingjessehouwing
110k2222 gold badges264264 silver badges358358 bronze badges
Add a comment
|
|
I would like to know what happens when the disk holding the shared folder where full and incremental TFS backups are saved runs out of space. In my configuration, the disk where backups are written is not the disk holding the TFS database. What happens if there is insufficient disk space for the continuous backups that TFS performs for various clients?
|
Low disk space TFS backup
|
0
Here it says that you can recover your data on another physical machine with the same configuration as your last one. Hope this helps.
Share
Improve this answer
Follow
answered Sep 19, 2017 at 11:32
HelenAHelenA
9611 silver badge1111 bronze badges
Add a comment
|
|
So a while back I made a backup image of my hard drive and put it on an external drive. Then someone stole my laptop, so I no longer have that computer. Is there a way to restore the image without having the original computer?
|
Windows 10 System backup
|
0
my USD$0.02: There are several projects that I know of which attempt to backup k8s clusters at the metadata level
We focus on etcd backups, since our risk is wider than just the k8s descriptors, but as the adage goes: "they're not backups until you've tested them," which I (unfortunately) have not made the time to try
One also needs to exercise caution because any cluster state backups will contain cleartext Secret descriptors; so any backup solution needs to be aware of encryption or to skip over those descriptors
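As a concrete illustration of the etcd-level approach mentioned above, a snapshot on a kubeadm-style control-plane node might look like this (the certificate paths are kubeadm defaults and are assumptions; adjust for your distribution):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-$(date +%F).db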
Share
Improve this answer
Follow
answered Sep 12, 2017 at 4:25
mdanielmdaniel
32.1k55 gold badges5656 silver badges5959 bronze badges
Add a comment
|
|
I would like to understand the existing problems in the field of openstack and kubernetes with respect to backup and restore. Any link or reference to any research related matter would also be helpful.
|
What are the challenges or problems in Backing up and restoring Open source softwares like openstack and kubernetes?
|
0
You can take a backup of on-premise SQL server by using Microsoft Azure Backup Server.
If you want to protect on-premises workloads, you should install the Microsoft Azure Backup Server on a Hyper-V VM, a VMware VM, or a physical host. The recommended minimum requirements for the server hardware are two cores, 4 GB RAM and WS 2012.
You can’t protect the on-premises workloads with Microsoft Azure Backup Server installed on an Azure VM. Microsoft Azure Backup Server running as an Azure virtual machine can only protect workloads that run across multiple Azure cloud services that have the same Azure virtual network and Azure subscription.
Share
Improve this answer
Follow
answered Aug 31, 2017 at 16:16
Vikranth SVikranth S
48155 silver badges1010 bronze badges
Add a comment
|
|
I want to backup my SQL Server residing on-premises to Azure using Azure Backup Server (one of the 4 available options i.e. MARS Agent, System Center DPM, Azure Backup Server and Azure IaaS VM Backup).
Is that possible? If yes, then how?
The link I referred to is this one.
But there is no mention where my SQL Server should reside if Backup Server is in Azure VM.
|
Backing up SQL Server on Azure Backup Server
|
This error usually occurs when a disk is larger than 1 TB. As of now, disks larger than 1 TB are not supported. If your environment has such disks, I suggest you remove the disks that are larger than 1 TB and then run the backup job.
|
For the last few days, the backup of a Linux VM in Azure has been failing daily with the following error message and code. We have not made any changes to the VM or the Recovery Services vault.
Error Code: CopyingVHDsFromBackUpVaultTakingLongTime
Error Message: Copying backed up data from vault timed out
The backup job completes the VM snapshot task successfully, but the "transfer data to vault" step runs endlessly for over 24 hours and eventually fails with a timeout message.
|
Azure VM Backups failure: CopyingVHDsFromBackUpVaultTakingLongTime
|
0
Why don't you upload your MySQL dump through phpMyAdmin?
Steps:
1) Go to phpMyAdmin
2) Go to Import
3) Select the dump
4) Upload it to the database
As you have already mentioned that you are using a XAMPP server, you can easily upload the dump through phpMyAdmin.
Share
Improve this answer
Follow
answered Jul 16, 2017 at 13:37
Utkarsh SaxenaUtkarsh Saxena
122 bronze badges
1
I want to do it programmatically, almost like having a save and load function within the program.
– Bryce
Jul 16, 2017 at 13:44
Add a comment
|
|
Below is the code I am trying to use to restore a .sql dump to a MySQL database on a local XAMPP server. I just can't see what I am doing wrong. Can anyone help?
try {
String dbName = "Database";
String dbUser = "root";
String dbPass = "";
String[] executeCmd = new String[]{"/Applications/XAMPP/bin/mysql", " --user=" + dbUser, " --password=" + dbPass, " "+dbName, " -e", " source "+s};
Process runtimeProcess = Runtime.getRuntime().exec(executeCmd);
int processComplete = runtimeProcess.waitFor();
System.out.println(processComplete);
if (processComplete == 0) {
JOptionPane.showMessageDialog(null, "Successfully restored);
} else {
JOptionPane.showMessageDialog(null, "Error restoring");
}
} catch (IOException | InterruptedException | HeadlessException ex) {
JOptionPane.showMessageDialog(null, "Error" + ex.getMessage());
}
|
Restoring MySQL .sql dump not working when trying to local XAMPP MySQL server
|
0
It's safe.
The system prevents two processes from writing the same file at the same time because the file is locked.
A tar backup only reads the files and creates a new archive, so it's OK; but shutting down the web server first is the safest way to back up the files.
Share
Improve this answer
Follow
answered Jun 26, 2017 at 16:02
季文康季文康
1133 bronze badges
Add a comment
|
|
I have a question about performing routine backups on our web server. Currently we are running Apache and we want to back up our doc root. I have a shell script that runs nightly and the command I use is: sudo tar cvzf filename targetFilename.
My question is: is it safe to run a tar cvzf command on a doc root while files are being read, written and created? Is there a better way to do this? Is it a good idea to shut down Apache while creating the tar file?
I've done some research and I couldn't find a straight forward answer.
Thank you for your help!
|
How to backup docroot using tar
|
0
Use git? Import your project into a git repository and figure out a deployment process. One easy option is to SSH to the production server and run git fetch, git checkout, etc. Be careful to deny access to your .git folder if you do so!
Share
Improve this answer
Follow
answered Jun 22, 2017 at 6:45
Victor RaduVictor Radu
2,27211 gold badge1313 silver badges1818 bronze badges
Add a comment
|
|
I am going to start an e-commerce website soon. I am new to managing a live website. The website has lots of product details, product images, PHP source code and script files. After it goes live there will be many updates, corrections and patches to the code. As it's a startup I don't have any other resource to manage it; it will be managed by me only.
My question is: what is the best method to manage source code updates and keep logs of them so changes are efficiently traceable? Should it maintain line-by-line update logs?
|
How to efficiently manage source code updation of website in Live environment and patching?
|
0
For Windows create below python script and copy it to the temp dir, just replace the path with your 'temp' :
import os
import shutil
if not os.path.exists('C:/Users/myuser/Desktop/day/folderA'):
os.makedirs('C:/Users/myuser/Desktop/day/folderA')
if not os.path.exists('C:/Users/myuser/Desktop/day/folderB'):
os.makedirs('C:/Users/myuser/Desktop/day/folderB')
sourcepath='C:/Users/myuser/Desktop/day'
source = os.listdir(sourcepath)
destinationpath = 'C:/Users/myuser/Desktop/day/folderA'
destinationpath2 = 'C:/Users/myuser/Desktop/day/folderB'
for files in source:
if files.startswith('a'):
shutil.move(os.path.join(sourcepath,files), os.path.join(destinationpath,files))
if files.startswith('b'):
shutil.move(os.path.join(sourcepath,files), os.path.join(destinationpath2,files))
listA = os.listdir('C:/Users/myuser/Desktop/day/folderA')
listA.sort()
listB = os.listdir('C:/Users/myuser/Desktop/day/folderB')
listB.sort()
Share
Improve this answer
Follow
answered Jun 2, 2017 at 9:39
monimoni
1644 bronze badges
0
Add a comment
|
|
I am taking backup/sync files in my computer to external harddisk.
for example I sorted some files in my external harddisk like this
(I have around 1000 directories and 10000 files, Directory structure below given is for illustrative purpose only)
folderA
-aa.jpg
-ab.mp3
-ac.mp4
folderB
-ba.jpg
-bb.mp3
-bc.mp4
and in my computer I have same files in a folder "temp"
aa.jpg, ab.mp3, ac.mp4, ba.jpg, bb.mp3, bc.mp4
Where I have
I want files in "temp" to be arranged like this
temp
--folderA
-aa.jpg
-ab.mp3
-ac.mp4
--folderB
-ba.jpg
-bb.mp3
-bc.mp4
Is there any tool, or script to do this for me (for 1000+ directories and 10000+ files)?
|
How to rearrange files with respect to a directory structure containing same files?
|
After some struggling I've found a workaround, so I'll post it here for others...
I mounted the needed recovery point as a disk drive and started a file copy. It shows the standard Windows file-copy progress dialog, which has a pause option. So after ~5.5 hours, just before the drive is unmounted, I paused the copy, unmounted the drive manually, mounted it again (getting another six-hour slot), and then resumed the copy. Well, I don't think this is how Microsoft intended it to work, but it gets the job done.
Happy restoring!
|
I am trying to restore a big file (~40GB) from Azure backup. I can see my recovery point and mount it as disk drive so I can copy/paste the file I need. The problem is that the copying takes approx. 8 hours, but the disk drive (recovery point) is automatically unmounted after 6 hours and the process fails consistently. I couldn't find any setting in the backup agent to increase this slot.
Any thoughts how to overcome this?
|
Cannot restore big file from azure backup because of six hours timeout
|
0
You can do that if the following points are maintained :
Active Theme code should not be changed.
All the plugins from the previous installation need to be present in the wp-content/plugins folder of the new one.
Restore the MySQL file into the new database, and change the DB name in wp-config.php.
Hope that will do.
Share
Improve this answer
Follow
answered May 13, 2017 at 10:03
TristupTristup
3,63311 gold badge1414 silver badges2626 bronze badges
Add a comment
|
|
I had a WordPress site that is down now, and I only have the MySQL file. I want to restore my WordPress site with this file; I don't have any other files from my old site. How can I do it?
|
how can i restore a wordpress site by mysql file only?
|
0
Your path (@app/backups) is not writable! Check chmod!
Share
Improve this answer
Follow
answered Mar 11, 2019 at 17:03
Ihor MuravshchykIhor Muravshchyk
1
0
Add a comment
|
|
I want to add database backup and restore to my Yii2 basic project. For this I have added the Beaten-Sect0r/yii2-db-manager extension through Composer. I have added the following code in config/web.php:
'modules' => [
'db-manager' => [
'class' => 'bs\dbManager\Module',
// path to directory for the dumps
'path' => '@app/backups',
// list of registerd db-components
'dbList' => ['db'],
'as access' => [
'class' => 'yii\filters\AccessControl',
'rules' => [
[
'allow' => true,
'roles' => ['admin'],
],
],
],
],
I also created a writable directory named backup in the app root directory.
Now how do I access the backup functionality? Should I create a view, model and controller to do so? I used a URL to access the backup form like this: localhost:8081/myproj/web/index.php?r=db-manager and it's not working. The following error appears.
|
database backup and restore yii2 using Beaten-Sect0r / yii2-db-manager
|
You don't need root access. root is a special account, and most commands work without it. Any account that has privileges to perform configuration changes can run the command to archive the configuration on a given site.
See junos-os-login-classes-overview for user privileges.
The prompt has this format: user@hostname>
If there is no hostname defined, then it is just: user>
Once you make sure you have logged in as the correct user, i.e. one with the required permissions, you should be able to execute those commands and apply the archival configuration.
I have a working solution, let me know if the above doesn't help.
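If the device really runs Junos (which the documentation link above assumes), the archival stanza entered from configuration mode looks roughly like the following; the SCP URL, credentials and interval are placeholders. Note that the SSG5 in the question runs ScreenOS rather than Junos, so this sketch only applies to Junos boxes.
set system archival configuration transfer-interval 1440
set system archival configuration archive-sites "scp://backup@192.0.2.10:/archive" password "secret"
commit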
|
Can't get to the root on juniper ssg5
After I enter my login username and password, I'm stuck at this prompt:
'my-fw->'
my-fw-> copy
^------unknown keyword copy
my-fw-> show
^------unknown keyword show
my-fw-> configure
^-----------unknown keyword configure
Why can't I get to the root@my-fw-> or root@my-fw-# prompt? What can I do to get to root? I'm using PuTTY to console into the Juniper SSG5.
[Note: I'm trying to back up the config to a TFTP server, and I thought that required root access.]
|
Can't get to the root on juniper ssg5
|
Go to phpMyAdmin, select your database, then open the Import tab, browse to your name.sql file, and press Go; it will upload and execute your file.
|
Hi, I built this page for a client in WordPress on localhost. I made a backup, but I cannot import the SQL database; an error pops up. I have uploaded everything else to the server, and now only the database is left, so I'm stuck here. Please help; I attach a screenshot.
This is time-sensitive for me, please and thank you!
|
backup SQL from localhost
|
Buy another domain, deploy the exact same code there, and keep it offline; if the main site goes down, you can bring the copy online.
|
I have the following problem: if I have one website and its server goes down, can I configure a backup website? Example: www.exemplesite.com goes offline because of technical issues, so www.exemplesite2.com goes online, so my users don't have to wait until the first site comes back.
I want something like: if site 1 goes down, then site 2 goes online while site 1 is down; when site 1 comes back up, I take site 2 down again. If I'm not being clear, I'm sorry; my English is not very good.
|
How to switch sites in different servers
|
I'd use the split program to split the output from dd into different files. You can adjust the split size as you see fit (look at the 5000m argument):
dd if=/dev/sdb | split -b 5000m - /tmp/output.gz
This will yield files like /tmp/output.gz.aa, /tmp/output.gz.ab, etc. (the .gz suffix here is only part of the name; nothing is compressed yet).
Additionally, to save storage space, you can gzip the stream on the fly, like this:
dd if=/dev/sdb | gzip -c | split -b 5000m - /tmp/output.gz
Later, when you want to restore, do this:
cat /tmp/output.gz.* | gzip -dc | dd of=/dev/sdb
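As a sanity check before relying on the pieces, you can compare a checksum of the source device with a checksum of what the pieces decompress back to; this assumes the device is not modified in between.
# checksum of the source device
sudo dd if=/dev/sdb | sha256sum
# checksum of the reassembled, decompressed stream
cat /tmp/output.gz.* | gzip -dc | sha256sum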
|
I want to create an iso from an external hard drive.
I used this command:
sudo dd if=/dev/sdb of=usb-image.iso
It works; however, the disk is large (700 GB), and I don't have space on my laptop to store that much.
I was thinking about creating multiple image files (each file 5 GB, for example); this way I can manage them by storing some parts on other drives.
Any help?
Thanks
|
Linux dd create multiple iso files
|
The message is quite clear: the directory already exists.
Remove it, or move it to a temporary location.
To move it:
mv /home/git/gitlab/tmp/backups/repositories /home/git/gitlab/tmp/backups/repositories.old
|
So I am trying to back up GitLab (installed from source) following this guide: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/raketasks/backup_restore.md (using the command sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production).
It works in the beginning but it stops while dumping the repositories.
What do I have to fix?
Dumping repositories ...
rake aborted!
Errno::EEXIST: File exists @ dir_s_mkdir - /home/git/gitlab/tmp/backups/repositories
/home/git/gitlab/lib/backup/repository.rb:136:in `prepare'
/home/git/gitlab/lib/backup/repository.rb:8:in `dump'
/home/git/gitlab/lib/tasks/gitlab/backup.rake:69:in `block (4 levels) in <top (required)>'
/home/git/gitlab/lib/tasks/gitlab/backup.rake:12:in `block (3 levels) in <top (required)>'
Tasks: TOP => gitlab:backup:repo:create
(See full trace by running task with --trace)
root@gitlab-test gitlab/public#
|
Command end with a Errno: :EEXIST Message
|
My suggestion is to leave the data encryption the same. Create a provider service on a server, route all requests for the data through that server, and let no other entity have the encryption key. Change the authentication to the provider server on a periodic schedule.
If the data is valuable, invest in an HSM (Hardware Security Module) and use it to secure the actual encryption key. One method is for the provider server to obtain the encryption key only when decrypting, and then only in RAM, so the key is never saved to a file. Another method, if the decryption requests are not too frequent or too large, is for the HSM to perform the actual encryption and decryption, so the encryption key is never available outside the HSM. HSMs start at around $500 and go up in price quickly. There are levels of security ranging up to tamper-responsive: the device detects an intrusion attempt and destroys its contents in the process.
|
We make daily backups of configuration files from many servers. Each conf file (compressed) is from 100 KB to a few MB, and about 650 new files are added every day. They are very important and confidential, so we encrypt each conf file with the same passphrase. However, we must change this passphrase every 3 months, and old files can't be deleted, so we need to re-encrypt all of them with the new passphrase. Currently we have more than 300,000 files, stored on network storage. It's very painful to decrypt and re-encrypt so many files every 3 months.
I was considering of using GPG:
gen a new GPG key
set a passphrase for it, using the passphrase that is rotated every 3 months
encrypt every conf file with this GPG key
3 months later
only change the passphrase of the GPG key to the latest one; there is no need to decrypt and re-encrypt all the old files
But this seems insecure. All files can still be decrypted with the same GPG key and the older passphrase if someone has the old GPG keyring.
Is there any smarter way to do this kind of task? Thanks.
The backup task runs daily on one server, and all encrypted files are saved to network storage. Only a few people have the encryption key and access to the backup server.
|
how to efficiently encrypt many files every several months use different passwords?
|
I thought I had lost my files, but found out that from Disk Utility you can go to:
File > New Image > Image from Folder
This way you can save your files to an external hard drive, then open the image on another Mac and back them up. This really saved me; I hope it helps you too, because it took me forever to find this solution.
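If you can open Terminal from the Recovery environment, a command-line equivalent is roughly the following; the volume names and paths are placeholders, so adjust them to what Disk Utility shows on your machine.
# create a .dmg image of the user folder on the external drive
hdiutil create -srcfolder "/Volumes/Macintosh HD/Users/yourname" -o "/Volumes/ExternalDrive/backup.dmg"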
|
I ran an OS installation on an old MacBook Pro and the installation was not able to complete.
Is there a way to save my files before reformatting the Mac?
|
How to backup files on mac when OS installation failed to finish?
|