Response (stringlengths 8-2k) | Instruction (stringlengths 18-2k) | Prompt (stringlengths 14-160)
---|---|---|
To my knowledge, Microsoft has not provided a solution for this. You cannot create a Product Catalog list from an .stp list template either.
|
|
I am trying to migrate a SharePoint Product Catalog list to a sub-site using a list template backup/restore, and I am getting the error
Something Went Wrong.
Checking the logs with the correlation ID shows:
Application error when access /_layouts/15/new.aspx, Error=Value does not fall within the expected range. at Microsoft.SharePoint.Library.SPRequestInternalClass.CreateListFromFormPost(String bstrUrl, String& pbstrGuid, String& pbstrNextUrl) at Microsoft.SharePoint.Library.SPRequest.CreateListFromFormPost(String bstrUrl, String& pbstrGuid, String& pbstrNextUrl) at Microsoft.SharePoint.SPListCollection.CreateListFromRpc(NameValueCollection queryString, Uri& nextUrl) at Microsoft.SharePoint.ApplicationPages.NewListPage.BtnOk_Click(Object sender, EventArgs args) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) 883aeb9c-fdf6-301a-75c9-3a32ec634865
System.ArgumentException: Value does not fall within the expected range. at Microsoft.SharePoint.Library.SPRequestInternalClass.CreateListFromFormPost(String bstrUrl, String& pbstrGuid, String& pbstrNextUrl) at Microsoft.SharePoint.Library.SPRequest.CreateListFromFormPost(String bstrUrl, String& pbstrGuid, String& pbstrNextUrl) at Microsoft.SharePoint.SPListCollection.CreateListFromRpc(NameValueCollection queryString, Uri& nextUrl) at Microsoft.SharePoint.ApplicationPages.NewListPage.BtnOk_Click(Object sender, EventArgs args) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) 883aeb9c-fdf6-301a-75c9-3a32ec634865
Using PowerShell I get the same error.
|
Migrating SharePoint Product Catalog List at Subsite level
|
vincenet,
You should do this differently. A UBI filesystem is not like the other images (MLO, barebox, kernel) on your NAND flash, or even a hard disk image, so it cannot simply be copied with cp from the nand0.root.bb partition. That is why your new system does not boot correctly.
Unfortunately I am also looking for a solution to do this, but I only know that the other direction (copying a UBIFS image to flash) cannot be done like this either.
I think you first need the UBI tools (ubimkvol, ubiattach, ubiformat) inside your barebox. If you don't have them, look for a barebox version for your system that does and flash it (after first making a backup of the old one, of course). This was one of my issues previously. If you know how to compile one that exactly fits your system, go into menuconfig and you should find the proper UBI commands.
Once you have these commands, I think the approach is to attach /dev/nand0.root to the system as a new character device like this:
ubiattach /dev/nand0.root
UBI: attaching mtd0 to ubi0
...
...
This is at least how it goes when you want to flash a new image to NAND. It creates a /dev/ubi0 node. Unfortunately, I tried this and failed on the following mount command, which I ran just to test whether the node had been created correctly:
mount /dev/ubi0 /mnt/rootfs
mount: bad file number
So if even mounting fails, this is probably not the correct way to create the image either, but maybe it is the right direction. Does someone else know the complete solution?
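If you can boot a full Linux userspace on the board (rather than working only from barebox), a minimal sketch of a file-level backup and image rebuild with the standard mtd-utils might look like the following. The MTD number, the volume name "root", the mount point and all geometry values are assumptions and must match your NAND:
# On the running board: attach the rootfs partition and copy the files out.
ubiattach /dev/ubi_ctrl -m 0            # creates /dev/ubi0
mkdir -p /mnt/rootfs
mount -t ubifs ubi0:root /mnt/rootfs    # mount the named UBI volume
tar czf /tmp/rootfs-backup.tar.gz -C /mnt/rootfs .
# On a build host: recreate a flashable image from the extracted tree.
# -m/-e/-c and the ubinize.cfg contents depend on your NAND geometry.
mkfs.ubifs -r rootfs_dir -m 2048 -e 126976 -c 2047 -o rootfs.ubifs
ubinize -o rootfs.ubi -m 2048 -p 128KiB -s 2048 ubinize.cfg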
Zoli
@Zoli: I have no idea about the solution, since for now I prefer to work with the Yocto build and integrate the application and any configuration directly into the rootfs image.
– vincenet
Sep 7, 2015 at 7:31
|
|
I have an embedded Linux environment working well, and I want to save the rootfs part so I can then flash other new boards that have empty NAND.
Here is what I tried: from barebox (before the kernel boots), I pushed /dev/nand0.root.bb to the TFTP server on my PC.
Then I renamed it to rootfs.ubifs, put it in the right place, and called the "update -t rootfs" script from another (new) board.
The sizes are different:
17301504 original rootfs.ubifs
264630784 /dev/nand0.root.bb
The problem is that the system does not boot correctly.
Is there anyone here who works this way? I need help...
|
How to get a backup of embedded Linux rootfs?
|
The best approach is a Time Backup or a total backup.
Time Backup is more advanced than the default backup software, but it gives you the option of restoring all the data.
A total backup is for the case where your NAS dies and you have to reinstall the OS.
|
|
I have two Synology NAS systems running DSM 5.1, and I want to make a daily network backup from the first DSM to the second one.
I am using the Backup & Recovery tool from the DSM GUI (I have already created shared folders, granted permissions, etc.). When I get to the step where I choose shared folders, there is no option to select all shared folders (I can only pick them one by one), and that is the problem: if I create a new shared folder, it is not included in the next backup.
I found the configuration file (synobackup.conf), and it contains:
backup_folders=["/folder1", "/folder2", "/folder3", "/folder4", "/folder5"]
So my question is:
How can I include all shared folders (existing and newly created) in the scheduled backup?
Can I put anything in backup_folders=["XXX"] to select all shared folders? I have tried * and $, but it does not work.
Thank you!
|
Synology DSM Backup - configuration file
|
As you can read at http://developer.android.com/guide/topics/data/backup.html you can save any type of data, including files.
Also take a look at http://developer.android.com/reference/android/app/backup/FileBackupHelper.html
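For a rough idea, a minimal backup agent using FileBackupHelper might look like the sketch below; the file name "notes.txt" and the key string are assumptions made for illustration, and you still need to register the agent with android:backupAgent in the manifest and call BackupManager.dataChanged() when the file changes.
import android.app.backup.BackupAgentHelper;
import android.app.backup.FileBackupHelper;
// Sketch: backs up a single small file from the app's files directory.
public class MyBackupAgent extends BackupAgentHelper {
    static final String FILE_NAME = "notes.txt";        // hypothetical file
    static final String FILES_BACKUP_KEY = "my_files";  // arbitrary key
    @Override
    public void onCreate() {
        addHelper(FILES_BACKUP_KEY, new FileBackupHelper(this, FILE_NAME));
    }
}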
|
|
Is it possible to use Android's Backup Manager to back up a minimal file (a text file with 20 words)?
I can't quite understand whether this is possible and how to do it.
|
Android's Backup Manager to backup a minimal file
|
Configure and enable Database Mail.
Then create an operator and have each job notify it; every time a job fails, the operator will be sent an email.
This link describes in detail how to configure your alerts.
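As a rough sketch of the operator approach in T-SQL (the operator name, email address and job name below are placeholders, not values from the question):
USE msdb;
EXEC dbo.sp_add_operator
     @name = N'DBA Team',
     @enabled = 1,
     @email_address = N'dba-team@example.com';
EXEC dbo.sp_update_job
     @job_name = N'Nightly Backup',              -- assumed job name
     @notify_level_email = 2,                    -- 2 = notify on failure
     @notify_email_operator_name = N'DBA Team';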
Thank you. :) I wish to automate the procedure and I'm looking into SSIS and powershell options, alas, currently without any luck.
– Stiegler
Apr 18, 2015 at 8:20
|
|
I have to check multiple servers daily to find their backup folders (and maybe other data, like backup time, databases, and backup type).
Is there any way to write a script that lists the backup folder if I know the name of the maintenance plan?
I found this helpful, but it requires the backup to have been run at least once.
|
T-sql to find backup folder for specific maintenance plan
|
I recommend using VACUUM to shrink the database size, or the auto_vacuum pragma.
If your database is big, you can try to zip it.
Delete all unnecessary libraries (compat_v7, for example, if you don't need it).
Try compressing images with optipng.
Try converting your WAV or MP3 files to AAC.
And... how many MB are we talking about?
I can't see how deleting libs affects the database (and any unused libs would be stripped from the release APK anyway). The initial database may be 700-800 KB (at the moment), but will probably grow to perhaps 2 MB. As for the user-added data (which is what is being backed up), we are talking about maybe 100-200 KB. VACUUM would definitely be an option if I backed up the entire DB image, which I am not going to do.
– Oyvind
Feb 5, 2015 at 13:13
No, I was talking about reducing your apk size.
– aprados
Feb 5, 2015 at 13:26
Maybe you should think about changing your table/field structure. Serialization will make your DB grow, so I think your DB structure is the most important point. Can we see your database structure and relations?
– aprados
Feb 5, 2015 at 13:34
APK size has nothing to do with this, and the structure of the database (which is as good as it can be) is not relevant... and serialization has absolutely nothing to do with the size of the database; serialization is about transporting/persisting objects. My question is about best practice for backing up selected sets of my data.
– Oyvind
Feb 5, 2015 at 14:12
|
|
I am about to implement backup for my Android app, and my issue is: The data resides in an sqlite db. Some of the data are just there for user convenience, and can be recreated from other sources. So in order to minimize the size, I wish to export relevant data only (the limit for backup using the Google API is 1mb).
All the data has class equivalents, which are populated via my SQLiteOpenHelper implementation. This means I can implement serialization.
So far I can see the following options:
Serialization using Java Serializable, and write all objects into a binary chunk and pass it to writeEntityData()
Serialization using XML or JSON, perhaps together with the zip API and dump the file as a binary chunk
Clone the database with relevant objects only. Probably a lot of work.
So far, using XML or JSON seems to be the best option, as I can reuse it for data sharing across users/devices. Java Serializable seems to bloat the size.
Would like to hear your opinions on this !
|
Backup Sqlite objects in Android
|
You can use AzCopy to bulk copy files from a local file system to an Azure blob container.
If you use the /XO option, that will only copy files newer than the last backup - in combination with the /Y option to suppress confirmation prompts, this is handy for running as a scheduled task to keep the blob backup current.
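A sketch of such a scheduled task using the classic AzCopy command-line syntax of that era might look like this; the source folder, storage account, container and key are placeholders:
AzCopy /Source:C:\WorkerRoleUploads /Dest:https://mystorageaccount.blob.core.windows.net/backups /DestKey:STORAGE_ACCOUNT_KEY /S /XO /Y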
|
|
We have a worker role that uses a local storage directory to save files uploaded by the customer, and we need to have a backup of those files.
Given that we already planned to move the worker role to the storage services available on Azure, is there a temporary solution that we could use immediately to back up those files?
Is there an automated way (even with 3rd-party services) to back up a local storage directory or even the entire C drive?
|
Backup of local resources on Azure cloud services
|
Yes, in the standard Python debugger (pdb) you can:
>>> l 1, 100000
<...contents of the file...>
>>> l 1, 100000 File "<stdin>", line 1 l 1, 100000 ^ SyntaxError: invalid syntax
– user3470313
Jan 28, 2015 at 17:05
|
|
While working with my Python script, I occasionally closed the unsaved script.py file and part of my script was lost. However, I still have the Python shell open in which the full script was loaded. Is it possible to restore the full script?
Thanks for the help,
Gleb
|
On the back-uping of python script
|
SQL Server Management Studio (SSMS) is not great at producing scripts, in particular it does not do a very good job of determining the order in which objects must be created in order to satisfy dependencies. As soon as you have one error the chance of further errors increases dramatically.
With regards to your expectation that "database could not have existed with invalid objects/relations or a procedure having syntax error in its definition" - this is not correct. There are a number of ways in which invalid objects can exist in a database.
Depending on how you created your script you might want to take a look at the Tools menu, Options, SQL Server Object Explorer, Scripting and review the settings there.
Rhys
|
|
I have a huge database. I got script of my database schema using sql server management studio.
The script contains 176,000 lines. When I copied the script into a new query window and executed it, it reported:
1. Incorrect syntax near 'GO' => This error repeats after 90% of error lines
2. Must declare the scalar variable "@abc"
3. The variable name '@sql' has already been declared. Variable names must be unique within a query batch or stored procedure
4. Foreign key 'FK_FAVOURITES_DETAIL_FAVOURITES' references invalid table 'dbo.FAVOURITES'
5. Cannot find the object "dbo.LIC_INFO" because it does not exist or you do not have permissions
My expectation was that the database could not have existed with invalid objects/relations, or with a procedure having a syntax error in its definition.
Is Management Studio limited in its ability to correctly generate a script of a particular length, or to run a query batch of a particular length, or can it fail on particular scripts (e.g. dynamic SQL in procedures, or user-defined data types)?
Or could something be wrong with the process I followed?
|
Sql script generated by management studio gives errors when run as query
|
If you have linux/unix servers and have SSH access for both of them you can use rsync to sync modified files/directories from one server to another.
For example:
rsync -a ~/dir1 username@remote_host:destination_directory
See: http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
This can be made periodic with cron.
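For example, a crontab entry along these lines would re-run the sync every night; the schedule, paths, user and host below are placeholders:
# m h dom mon dow command
30 2 * * * rsync -az --delete /var/www/clientsite/ username@remote_host:/var/www/mirror/ >> /var/log/mirror-sync.log 2>&1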
|
I need to figure out how to easily mirror a client's site, but the mirror needs to sync if the client makes changes on the site.
an example is http://www.nailcotehall.co.uk mirrored here http://nailcotehall-px.rtrk.co.uk/index.html?utm_source=google&utm_medium=cpc&utm_campaign=NailcoteLandingPage
|
Easily Mirror Client sites
|
sqlcmd -U "username" -P "password" -i D:\Programs\FastGlacier\backupAllDB.sql
|
I am new to SQL. At work I was given a new kind of task: automating a full SQL backup. I have almost got it; it works from SQL Server Management Studio as a query (by the way, I have to do this on the server). So I am trying to execute the query like this: "sqlcmd -E -S EUTAX-WS\REMEK -i D:\Programs\FastGlacier\backupAllDB.sql", and it works on my PC, but on the server it doesn't.
Here is my batch file:
sqlcmd -E -S EUTAX-WS\REMEK -i D:\Programs\FastGlacier\backupAllDB.sql
And here is my SQL query:
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
-- please change the set @path = 'change to your backup location'. for example,
-- SET @path = 'C:\backup\'
-- or SET @path = 'O:\sqlbackup\' if you are using remote drives
-- note that remote drive setup is an extra step you have to perform in SQL Server in order to back up your DBs to a remote drive
-- you have to change your SQL Server account to a network account and grant that user full access to the network drive you are backing up to
SET @path = 'd:\Backup\Remek\'
SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)
DECLARE db_cursor CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb')
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
BACKUP DATABASE @name TO DISK = @fileName
FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
And when I try to execute it, I get the following error:
The server principal "" is not able to access the database "" under current security context.
Somehow I should add permissions, but I cannot figure out how to do it.
|
Can not execute a Full SQL backup
|
Not sure about the web service, but I know you can access the state of backup jobs by running the bpdbjobs command and parsing its output.
|
|
I work with SharePoint. I was given a project where I need to call the NetBackup web services and download all the failed backup jobs (backup status = failed, or something like it).
All I know is that the backup team gave me a URL, http://netbk004/Operation/opscenter.home.landing.action? I have worked with ASMX before, but I have no clue how to consume exceptions from NetBackup. Is there an API that comes with NetBackup that I can use to populate a SharePoint list? Or web services; it doesn't matter, as long as I can download the exceptions to a SharePoint list.
|
Web Services - How to get failed backup jobs from NetBackup
|
You can try using this UPDATE trigger to update the rows in tblMainbackup whenever a row in tblMain is updated:
CREATE Trigger [dbo].[MainUpdateBackup]
On [dbo].[tblMain] AFTER UPDATE AS
BEGIN
UPDATE tblMainbackup
SET tblMainbackup.col1 = i1.col1,
tblMainbackup.col2 = i1.col2,
tblMainbackup.col3 = i1.col3
FROM tblMainbackup t1
INNER JOIN inserted i1
ON t1.Id = i1.ID
END
|
So I have an intermittent problem that I need a temporary fix for.
(Something is causing issues with two tables intermittently, every 3-4 weeks.)
One table gets completely cleared out and another seems to have individual records removed...
As this is an ongoing issue, we are still investigating the root cause, but we need a temporary fix.
As such, I have set up a second "backup" table for each of these tables and am trying to set up triggers to copy ALL INSERT and UPDATE operations.
I have created the Insert triggers with no problem.
ALTER Trigger [dbo].[MainInsertBackup]
On [dbo].[tblMain] AFTER INSERT AS
BEGIN
INSERT INTO tblMainBackup
SELECT * FROM inserted
END
Now I need a trigger that does the same for updates (performs the updates on tblMainBackup).
This way I should ensure that the backup table has all of the information from the main table, but if a record is somehow removed it can be copied back to the main table from the backup table.
The only examples I have found either update individual fields on update, or insert a whole new record into a log table on update.
Any help is appreciated.
|
SQL Triggers to duplicate update changes to another table
|
You tagged this question with google-dfp; if you are using DFP then solving this problem is trivial. Just set up a line item for AdX for the required ad units and set a floor price in AdX. Set up another line item which fills all the impressions that AdX didn't take, and put the random images in that line item.
Another way to solve this (without using an ad server) is setting up a backup HTML page instead of a backup image. This backup HTML page can be dynamically generated to output the random image that your client wants to show. You can also use this HTML page to run some JavaScript to collapse the ad-unit and trigger something on the parent page.
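If the slots are served through DFP/GPT tags, a rough sketch of detecting an unfilled slot on the parent page could look like this; the fallback image URL is a placeholder and the exact setup depends on your tagging:
googletag.cmd.push(function() {
  googletag.pubads().collapseEmptyDivs();
  googletag.pubads().addEventListener('slotRenderEnded', function(event) {
    if (event.isEmpty) {
      // Slot came back without a creative: swap in a local random image.
      var el = document.getElementById(event.slot.getSlotElementId());
      if (el) {
        el.innerHTML = '<img src="/img/house-ad.png" alt="">';
        el.style.display = 'block';
      }
    }
  });
});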
In terms of using an HTML page: with JavaScript, how can you identify which of the multiple adverts on the page is the ad slot that did not return any advert, i.e. which iframe to delete or hide from the DOM?
– usermomo
Jan 9, 2015 at 10:31
When using DFP you can ask DFP to collapse the divs that are not filled with a creative. DFP also has a callback which enables you to replace the collapsed divs with a random image (or do whatever you want). When using a pure AdX solution the backup HTML page can collapse the div/iframe itself. The iframes that are used by Google are Friendly iframes which enable JS in the iframe to access the parent page.
– BartVB
Jan 11, 2015 at 12:54
|
|
We are using Google DoubleClick Ad Exchange (https://www.google.com/adx/) for targeted adverts for a customer. What I am after is a way to find out or detect that no targeted adverts were available for display on a page with multiple adverts. I have set up backup adverts (https://support.google.com/adxseller/answer/1262285?hl=en), but because they are within iframes on another domain, I cannot access the contents of the iframe to determine whether the returned image is the backup image. The goal is to allow our customer, on the client side, to know when no targeted advert was available and replace it with an alternative random image of theirs. Is there anything in the Google Ad Exchange API or response that can be used on the client side to identify this?
Thanks in advance.
Mo
|
Google Ad exchange and detecting no targeted Adverts
|
As per your requirements, Oracle Data Guard is the best solution. Oracle GoldenGate uses a replication concept; Oracle Data Guard is purely for high availability. Various protection modes exist in Data Guard, and you can set the protection mode for minimum data loss. With Active Data Guard, the standby database (here, on the backup server as per your description) is also available for querying and executing read-only operations such as generating reports. This feature is used to reduce load on production (here, the primary server). In this state your standby database (backup server) is open in read-only mode while still accepting changes (redo) from the primary database, so it keeps applying them in the background and stays in sync. There is very little chance of data loss and very little downtime in this configuration. Using Data Guard, you can also set up automated switchover.
In older versions of Oracle (prior to 11g), if we opened the standby database in read-only mode, it did not accept changes from the primary database. If the primary database crashed in that situation, we had to apply all changes to the standby database manually and wait for data synchronization before we could switch over.
You need to study your technical requirements and consider your IT budget before using these features, because Oracle Data Guard is a licensed product.
|
I am a newbie to database administration. I am trying to get things sorted out, but as I study more and more about Oracle database backup I get more confused, so I've decided to ask here. Please accept my apologies if I say something ridiculous. :p
Here is my "simple" situation 1:
Assume I have two server racks: one is my primary server, the other is my backup server (both servers sit on the same site, using Oracle 11g). When the primary database breaks down, the primary database service will point to the backup database. Therefore, the backup database must always be updated from the primary database, like a mirror. So my questions are:
What backup method suits this situation? Oracle Dataguard? Oracle Stream? Oracle Goldengate?
Can Oracle Active Dataguard achieve this approach?
If Oracle Active Data Guard can achieve this, is the redo log only applied when there is a switchover? If so, when the primary database breaks down and the redo log only then starts to be applied to the backup database, I will have some downtime before production can resume. This production requires zero downtime.
Please feel free to comment on the database architecture base on the following requirements and feel free to change it if it is not correct.
Requirements:
No downtime. The site is running 24/7.
Auto switchover to backup database without human interaction.
Able to notify administrator after switchover (If the switchover is completely transparent, no one will realize something went wrong with the primary database right?)
Thank you so much.
P.S.: Sorry for my horrible English.
|
Oracle database real-time backup + auto switchover
|
JetBrains support got back to me with the right answer - I should use a POST method, not GET, even if the request body is empty.
Here is an example of a working request:
curl -u user:password --request POST http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true'&'includeDatabase=true'&'fileName=testBackup
And the response to that contains a plain file name in text: testBackup_20150108_141924.zip
|
|
I found plenty of information and example about triggering TeamCity 8.1.2 backups via the REST API.
But leaving the backup files on the same server is pretty useless for disaster recovery.
So I'm looking for a way to copy over the generated backup file to another location.
My question is about finding the name of the latest available backup file via the REST API -
The Web GUI includes this information under "Last Backup Report" under the "Backup" page of the Server Administration.
I've dug through https://confluence.jetbrains.com/display/TCD8/REST+API#RESTAPI-DataBackup and the /httpAuth/app/rest/application.wadl on my server. I didn't find any mention of a way to get this info through the REST API.
I also managed to trigger a backup with a hope that perhaps the response gives this information, but it's not there - the response body is empty and the headers don't include this info.
Right now I intend to fetch the HTML page and extract this information from there, but this feels very hackish and fragile (the structure of the web page could change any time).
Is there a recommended way to get this information automatically?
Thanks.
|
Finding latest TeamCity Backup via REST API
|
What about LocalSettings? You may try to store your files there.
public void Save(string key, string value)
{
ApplicationData.Current.LocalSettings.Values[key] = value;
}
public string Load(string key)
{
return ApplicationData.Current.LocalSettings.Values[key] as string;
}
|
I want to add multiple small StorageFiles (5-60 KB each) into one big container (ca. 1-3 MB with all files), that I can afterwards upload to OneDrive in Windows Phone 8.1.
What is a good and clean way to do that? Compression and/or Encryption is optional.
|
C# Windows Phone 8.1 create file container
|
Since this is an SQL file (a text file), you can split it into, for example, 10 parts and execute them one by one.
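A minimal sketch of that idea with standard Unix tools is shown below; the file names and line count are placeholders, and you must make sure the split points do not fall in the middle of a multi-line statement:
split -l 500000 backup.sql backup_part_
for f in backup_part_*; do
    # you will be prompted for the password for each part; a ~/.my.cnf avoids that
    mysql -u root -p mydatabase < "$f"
done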
|
|
I have a .sql file about 4 GB in size, and I want to restore this database. However, the 'max_allowed_packet' option is limited to 1 GB. So how do I restore this database? Someone please help me.
Thanks
|
How to backup and restore large .sql file about 4GB in MariaDB 10.0?
|
You need to run PerformPostRestoreFixup after you restore the database and before you sync it.
Each scope has a replica ID that identifies the replica. If you restore a DB, you will have two scopes with the same ID, which results in data not being synced.
To illustrate, let's say T1 was synced to A, and A records that the last time it synced with T1 had a timestamp of 1000.
You then restore A to an older version, which of course has lower timestamp values. Then you update the data on the restored DB, but the timestamps only go up to, say, 700.
When you sync it to T1, T1 says "give me the changes that have a timestamp greater than 1000" (the timestamp it recorded on the last sync), so no changes are detected.
Sync Framework does incremental sync, i.e. what has changed since the last sync; it does not compare row by row to find the differences between tables.
By running PerformPostRestoreFixup, you give the restored copy a new replica ID. The old replica ID is still tracked in T1, so it knows it has already synced the existing data in the restored DB. New changes, however, are recorded against the new replica ID, so when you sync, T1 does not know about it (it has no timestamp record of when it last synced with the new replica ID) and will therefore be able to detect changes to sync.
P.S.: this is a simplified illustration of your scenario; there is more happening under the hood.
|
|
I am syncing SQL Server databases using Microsoft Sync Framework.
My databases are frequently restored to earlier versions, and I need to keep the father (the destination of the sync process) updated.
Now, the thing is that I have a child A, with a table T1, and a father B with a table T1.
Both T1 tables have a tracking table that records the operations, called T1_tracking. First, I sync T1 from A to B. Then I restore the database in A to an earlier version and generate the data stored in T1 again (with different information). Consequently, T1_tracking in A is totally different from T1_tracking in B, and Sync Framework tells me that it has nothing to do.
Any solution? Please... Thanks!!...
|
sync framework on backup/restore scenarios
|
Take a look at MySQL binary logs. Basically two steps have to be done:
create a mysqldump
copy the binary logs that contain all changes made after that mysqldump
mysqldump gives you a full backup; the binary log records all the changes that happened afterwards. That is the closest thing to "incremental" backups in MySQL.
See http://dev.mysql.com/doc/refman/5.5/en/binary-log.html for more details. Be aware that you want the --master-data flag for mysqldump; otherwise you don't know where to start in the binary log.
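A rough sketch of the two steps, with placeholder credentials and paths; the binary log location depends on your server configuration, and log-bin must be enabled in my.cnf:
# Full backup, e.g. once a day; --master-data=2 records the binlog position in the dump.
mysqldump -u backup_user -p'secret' --all-databases --flush-logs --master-data=2 > /home/admin/full-$(date +%F).sql
# Incremental step, e.g. every 4 hours from cron: rotate the binary log and copy
# every log file except the newest (now-active) one.
mysqladmin -u backup_user -p'secret' flush-logs
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I{} cp {} /home/admin/binlog/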
|
The code given below works correctly for a full backup, but I need to change it to an incremental backup. I want to take a backup every 4 hours. How do I set up the time scheduling for the shell script?
#!/bin/bash
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/home/admin/$TIMESTAMP"
MYSQL_USER="test"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="******"
MYSQLDUMP=/usr/bin/mysqldump
mkdir -p "$BACKUP_DIR/mysql"
databases=`$MYSQL --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)"`
for db in $databases; do
$MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip > "$BACKUP_DIR/mysql/$db.gz"
done
|
how to get incremental backup for mysql using shellscript
|
You did not specify the platform, but assuming it is *nix, the solution below might work for you.
Auto backup : Using Crontab
sample command :
crontab -u username -e
* * * * * /usr/bin/mysqldump -u USER -pPASSWORD DB | gzip > FILENAME
Thanks
|
|
I take a backup of my database using
mysqldump -u nollyvenon -p caretime > caretime.sql && cipher /e /a caretime.sql
but I want to auto-backup the database into an encrypted zip file.
I want to run this as a scheduled task on Windows Server.
|
Auto backup encrypted in zip file
|
The query does not create the folder if it does not exist.
We have to create the folder ourselves instead.
Since we are using VB.NET, we had to create the folder with the following code before the backup:
My.Computer.FileSystem.CreateDirectory("D:\Profit\Data\")
|
When we try to back up our database, we get an error.
Front End : VB.Net
Back End : SQL Server
DB Name : PROFITSTORAGE
Backup Location : 'D:\Profit\Data\ProfitStorage.Bak'
Code:
Dim con As New SqlConnection
Dim query As SqlCommand
Try
con.ConnectionString = "Server=(LocalHost);Data Source=LocalHost\SQLEXPRESS;Integrated Security=SSPI"
con.Open()
query = con.CreateCommand
query.CommandText = "BACKUP DATABASE PROFITSTORAGE TO DISK='D:\Profit\Data\ProfitStorage.bak' WITH INIT"
query.ExecuteNonQuery()
query.Dispose()
con.Close()
Catch ex As Exception
MsgBox(ex.Message, MsgBoxStyle.Exclamation, "Backup Failed")
End Try
Query used :
BACKUP DATABASE PROFITSTORAGE
TO DISK='D:\Profit\Data\ProfitStorage.bak' WITH INIT
Error Message :
Cannot open backup device 'D:\Profit\Data\ProfitStorage.bak'. Operating system error 3 (failed to retrieve text for this error. Reason: 15105).
BACKUP DATABASE is terminating abnormally.
How do we sort out this issue?
|
SQL Server : Backup Error
|
Create a simple batch file that can be set up as a scheduled task to copy the completed backup to another location.
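A minimal sketch of such a batch file; the local backup path, the share and the log file are placeholders:
@echo off
rem Copy last night's Windows Server Backup to a second destination.
robocopy "D:\WindowsImageBackup" "\\fileserver\backups\WindowsImageBackup" /MIR /R:2 /W:5 /LOG:"C:\Logs\backup-copy.log"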
|
|
I'm using Windows 2008 R2 configured with Windows Backup using a dedicated hard disk: http://grab.by/CPhm and http://grab.by/CPhq .
Can I add a second destination (a shared folder) so that I have two backups every night? When I try, I get this alert: http://grab.by/CPhC , so it seems I have no way to do it.
Thank you very much
|
Windows server backup: adding more destination?
|
I had a problem like this with some data loading scripts. The scripts were in the form:
insert into table(a,b,c) values (a0,b0,c0),(a1,b1,c1),...,(a50000,b50000,c50000);
and contained from 5 to several dozen of these hyper-long statements. This format wasn't recognizable by the system I wanted to import the data into. That needed the form:
insert into table(a,b,c) values(a0,b0,c0);
insert into table(a,b,c) values(a1,b1,c1);
...
insert into table(a,b,c) values(a50000,b50000,c50000);
Even the smaller scripts were several MB and took up to an hour to load into a text editor. So making these changes in a standard text editor was out of the question. I wrote a quick little Java app that read in the first format and created a text file consisting of the second format. Even the largest scripts took less than 20 seconds total. Yes, that's seconds not minutes. That's even when a lot of the data was quoted text so I had to make the parser quote-aware.
You can write your own app (Java, C#, Perl, whatever) to do something similar. Write to a separate script file only those lines that pertain to the database you want. It will take a few hours or days to write and test the app, but it has probably taken you more than that just to research text editors that work with very large files -- just to find out they don't really.
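As a rough illustration of that approach (not the poster's actual program), a short Python sketch that streams a mysqldump file produced with --all-databases and keeps only one database's section might look like this; it assumes the dump contains "-- Current Database: `name`" markers, and the file and database names are placeholders:
wanted = "database1"
keep = False
with open("backup.sql", "r", encoding="utf-8", errors="replace") as src, \
     open("database1_only.sql", "w", encoding="utf-8") as dst:
    for line in src:
        # mysqldump writes this marker before each database's section
        if line.startswith("-- Current Database: "):
            keep = ("`%s`" % wanted) in line
        if keep:
            dst.write(line)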
|
How do I import a single database from a backup file that has multiple databases on it?
The main issue is that the file is 921 MB, so I can't successfully open it in Notepad or any of the IDEs I have. If I could do that, I would get the SQL that I need and just manually copy it into phpMyAdmin.
It would be good if I could just import it straight from the backup file.
I guess it would be something like this but I can't get it to work
mysql -u root -p --database1 database1 < backup.sql
Can anybody help? Thanks
|
How do I import a single database from a .sql file that contains multiple databases
|
There can be multiple reasons:
First, upload a simple PHP file containing only the phpinfo() function (http://php.net/manual/en/function.phpinfo.php), call it, and see whether everything is OK with the server, i.e. whether you have PHP enabled at all. Don't forget to delete that file once things start working.
If you get phpinfo output, that means PHP is working. The next possibility is that, because of different server settings, you are getting a PHP error but errors are not being displayed.
Try to find the PHP settings in your control panel and turn on on-page error reporting.
If you can't find it in the control panel, try enabling error reporting from the code: add
error_reporting(E_ALL);
as the first statement in the index.php file in the site root.
Check how much memory PHP has available. Upload the same phpinfo file to the working site and compare the differences in the output.
|
|
I'm trying to create a full backup of cyclekids.org. I've backed up all the files, dumped the database, and restored it on another machine (beta.cyclekids.org). However, Drupal doesn't seem to be rendering any page content on the backed-up site. Even pages that 404 on the regular site display the same mostly-blank template with a smattering of content.
What are likely culprits for this (e.g. misdirected theme file or broken config)?
|
Drupal not rendering page content
|
You should use:
CREATE TABLE copy_cust
AS SELECT *
FROM Customers
|
I need to create a backup copy of a table and all of the data in it.
This is what I have tried so far:
SELECT *
INTO copy_cust
FROM Customers;
Any help would be greatly appreciated.
|
Mysql create copy of table and all of the data
|
Which version do you use? And what's your platform? Try this:
mongodump --db mydatabase --collection records --query "{ 'embedded_document.field_1': { '$ne' : 'Zebra' }}" -vvvv
|
|
I'm trying to do a mongodump with query. Below is my syntax.
mongodump --db mydatabase --collection records --query '{ "embedded_document.field_1" : { "$ne" : "Zebra" }}' -vvvv
What I'm trying to do is dump all records with embedded_document.field_1 that is not equal to Zebra.
I have 100 records with Zebra in them, but the count of records found is equal to the total record count (5000).
Collection File Writing Progress: 200/5000 0% (objects)
The query works in mongo shell and it returns the correct count (100).
db.records.find({ "embedded_document.field_1" : { "$ne" : "Zebra" }}).count();
Any ideas?
|
How to use query in mongodump on embedded documents
|
Check if you can use xp_cmdshell. If yes, back up the database and copy the .bak file:
USE master
EXEC xp_cmdshell 'copy c:\sqlbackup\DB.bak \\server2\backups\', NO_OUTPUT
|
|
I have a database on a server to which I don't have RDP access. I need to create a backup of the data, stored procedures, and functions of the database on the server. I tried "generate script", but it fails saying a "transport layer error" occurred.
Is there any way I can generate a script of the database on the server using the command line or any other tool?
Thanks
|
Generate MSSQL database script using command line
|
To take a backup of a single table you can use the dump command as follows:
Dump and restore a single table from .sql
Dump
mysqldump db_name table_name > table_name.sql
Dumping from a remote database
mysqldump -u<db_username> -h<db_host> -p<db_password> db_name table_name > table_name.sql
Restore
mysql -u username -p db_name < /path/to/table_name.sql
Is there not an SQL command I can use to make a backup of a table and its contents?
– Jmac88
Nov 3, 2014 at 18:25
@Jmac88, it depends on which version of MySQL you are using. If you are using a version earlier than 5.5, you can use the BACKUP TABLE command, which has been deprecated since MySQL 5.5. There have been only two options since then: mysqldump and mysqlhotcopy. The most popular and recommended is mysqldump.
– Tushar Bhaware
Nov 3, 2014 at 18:34
|
|
I'm having problems creating a backup copy of one of my tables (I wish to back up all the contents as well).
I tried this
SELECT *
INTO Copy_customers
FROM Customers;
Any help would be greatly appreciated,
|
How to create backup copy of a table including table contents
|
A couple of possibilities... first that your programs/commands cannot be found when run from cron, and second that your database cannot be found when run from cron.
So, first the programs. You are using date and mysqldump, so at your Terminal prompt you need to find where they are located, like this:
which date
which mysqldump
Then you can either put the full paths that you get as output above into your script, or add a PATH= statement at the second line that incorporates both paths.
Secondly, your database. Where is it located? If it is in /home/mohan/sohan/ for example, you will need to change your script like this:
#!/bin/bash
name=`/bin/date +%Y%m%d`.sql
cd /home/mohan/sohan
/usr/local/bin/mysqldump -u abc --password=abc my_db > $name
|
|
My shell script for taking a backup of the database works fine normally.
But when I try to run it through crontab there is no backup.
This is my crontab:
* * * * * /home/mohan/sohan/backuptest.sh
The content of backuptest.sh is:
#!/bin/bash
name=`date +%Y%m%d`.sql
#echo $name
mysqldump -u abc --password=abc my_db > $name
The backup script works fine when run normally, but fails to generate a backup when run through crontab.
|
backup of database using shellscript using crontab fails?
|
I would recommend looking at the names of the files, as well as potentially renaming the database, just to rule out more possibilities. If the error keeps occurring, that should at least narrow things down a bit.
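Since the error message itself points to the WITH MOVE clause, a sketch of relocating the files during the restore might look like the following; the backup path, logical file names and target paths are placeholders that you should confirm with RESTORE FILELISTONLY first:
-- Inspect the logical file names stored in the backup.
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\sgdb.bak';
-- Restore, moving each logical file to a path that is not already claimed.
RESTORE DATABASE SGinv
FROM DISK = N'C:\Backups\sgdb.bak'
WITH MOVE N'SgDb_dat' TO N'C:\SQLData\SGinv.mdf',
     MOVE N'SgDb_log' TO N'C:\SQLData\SGinv_log.ldf';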
|
This question already has answers here:
database restore failing with move
(7 answers)
Closed 9 years ago.
I want to restore a backup from another server to my personal computer; unfortunately, I can't restore the backup. What should I do?
error message:
TITLE: Microsoft SQL Server Management Studio
Restore failed for Server 'MOHI-PC'. (Microsoft.SqlServer.SmoExtended)
For help, click here.
ADDITIONAL INFORMATION:
System.Data.SqlClient.SqlError: File 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sgdb.mdf' is claimed by 'SGinv_Data'(12) and 'SgDb_dat'(1). The WITH MOVE clause can be used to relocate one or more files. (Microsoft.SqlServer.Smo)
For help, click here.
BUTTONS:
OK
|
Restoring database back up [duplicate]
|
I'm posting this hoping it will help someone, as there is a real lack of resources in Google's documentation and on the web in general about this.
While the appengine documentation says this can be done, I actually found the piece of code that forbids this inside the data_storeadmin app.
I managed to connect through python remote-api shell, read an entity from the backup and tried saving to the datastore, but datastore.Put(entity) operation yielded: "BadRequestError: app s~app_a cannot access app s~app_b's data" so it seems to be on an even lower level.
In the end, I decided to restore only a specific namespace to the same app which was also a tedious task - but it did save the day.
I Managed to pull my backup locally through gsutil, install a python-remote-api version on my app, accessed the interactive shell and wrote this script:
https://gist.github.com/Shuky/ed8728f8eb6187475b9a
Hope this helps.
Shuky
|
I can't seem to restore my AppEngine backups to a new app as listed in the documentation.
We are using the cron backup as listed in the documentation.
I get through all the stages to launch the restore job successfully, but when it kicks off, all the shards fail with 503 errors.
I tried this with multiple backup files and the experience is the same.
any advice?
(Java runtime)
|
AppEngine Backup from one app to another
|
Did the IT guy delete the old local username? If not, have you tried logging in as the "old_username"? Not using the domain, but using computername\old_local_username.
|
|
I need to start by explaining the scenario, and then I'll ask the questions.
Sunday evening all redirected folders are copied and placed on a NAS to be accessed Monday morning to restore user files.
Monday morning every employee shows up to work to find that there is a new server cluster running a new domain that every computer needs to be joined to. The old setup was using redirected folders for desktop and my documents to an SBS2011 server. The new setup is not using redirected folders. As IT staff and contractors race around to get everyone added to the new domain Monday morning, someone creates and saves a file on their desktop before being added to the new domain. The IT person (who shall not be named) does not backup the user's files and moves them to the new domain.
Here is where the problem is. The IT person copies down the user files from the NAS and thinks the job is done. However, the user cannot find that one file he/she created that morning. Going to c:\users\old_username doesn't show desktop or my documents because they were redirected. But since the computer could not reach the SBS2011 server, the file never got redirected either. SO, where is it?!
|
Redirected folders: Missing files during a domain change. Where are they?
|
This typically means that your hosting plan does not include the "backup" feature in the database section.
|
I can't create a backup of a database (MS SQL 2005), using WebsitePanel. When I select the database and try to click on the "Backup" button from "Maintenance Tools", the "Backup" button is disabled.
|
Backup a database in WebSitePanel
|
You should set up replication for your MySQL server (master-slave); then:
You will have a live copy of your current DB on the slave MySQL instance.
You can stop the slave instance of MySQL, run the backup process on the slave, and start replication again; it does not affect the master instance, and of course your site will not be affected.
The master-slave replication can be done on the same machine using a different disk, or on another machine (recommended).
You can automate the process by creating .bat scripts to stop/start the slave instance and run the backup, and then add the .bat file to the Windows Task Scheduler.
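A minimal sketch of such a .bat script, assuming the backup runs against the slave instance; credentials, the port and the paths are placeholders:
@echo off
rem Pause replication on the slave, take a dump, then resume.
mysql -u backup_user -psecret -P 3307 -e "STOP SLAVE SQL_THREAD;"
mysqldump -u backup_user -psecret -P 3307 --all-databases > "D:\backups\nightly_%DATE:/=-%.sql"
mysql -u backup_user -psecret -P 3307 -e "START SLAVE SQL_THREAD;"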
Hi, thanks for your suggestion. I guess I have to read a little about replication, as I do not yet know how to go about it... but thanks for pointing me to this idea!
– user1104980
Sep 28, 2014 at 8:35
|
|
I have 2 databases with MyISAM tables running on a Windows 2008 Server. These databases are about 20GB in size with a few tables with millions of rows. My question regards backing them up on a weekly basis. Currently I do some updates once a week, and then I go to the data folder and copy the physical folders representing the databases to another drive on the server, and then rar everything up.
This process takes about 45 minutes and during that time certain functionality of my website cannot be used as during the copying, the tables get blocked. I have seen that you can LOCK and FLUSH tables so that they can still be used while they are being copied. So does LOCKing the tables allow concurrent SELECTs?
I do not know exactly how to go about this and I would greatly appreciate if anyone could help me with how I could synchronize the lock/flush statements with the copying of the physical data and then the subsequent unlocking, and how I could possibly automate (possibly script to a dos batch file) this process?
Thanks in advance,
Tim
|
MySQL backing up MyISAM tables by copying the physical files - and automating the process
|
To keep the latest 14 .zip files in a folder you can use this: remove the echo to activate the delete command.
It sorts the ZIP files by date, always skips the 14 most recent files, and deletes the rest.
Ensure that the folder shown already exists.
@echo off
cd /d "C:\Laptop Backups"
for /f "skip=14 delims=" %%a in ('dir *.zip /b /o-d /a-d') do echo del "%%a"
pause
|
|
I want to have "Data Files RC" in "C:\Laptop Backups" zipped, with the date on the end of the file name,
so there will be:
C:\Laptop Backups\
Data Files RC 9_22_2014.zip
Data Files RC 9_23_2014.zip
Data Files RC 9_24_2014.zip
Data Files RC 9_25_2014.zip
Then I want to look at the earliest date in the "C:\Laptop Backups" directory based on the created date,
not the date added to the file name, and delete any zipped files older than 14 days. So in the
example above, I want to get the created date for "Data Files RC 9_25_2014.zip" then count back 14 days
and delete all zipped files older than 14 days.
I want to use the earliest file date, because if I am on vacation or just not paying attention
and the system fails, then the delete will not continue deleting files based on todays date.
If I do not catch this then it would eventually delete all backups.
So if something fails but the delete is still working it would only delete from 14 days back from
the point of failure, i.e. so say after 9-26-2014 the copy option is not working, but the
delete is kicking in, then only 14 days back from 9-25-2014 would be deleted, then also on 9-27-2014 it would
still only delete from 9-25-2014 14 days back, instead of using 9-27-2014. This then would always have 14 days worth
of backups from 9-25-2014 back.
Maybe this is not an issue, but I do have a routine that deletes based on today's date, and I have separate backup software, which I just found was failing, so it occurred to me that the delete batch routine that I put in Windows Task Scheduler would have eventually deleted all my backups.
So what is the best way to avoid this? Can robocopy be set to look at the earliest file date and then delete older files based on that, or is there some other script or combination thereof?
|
robocopy based on earliest archived date?
|
Notepad is horrible with big files. Use Notepad++.
Or write a small C# app to open the file, read the bytes, and write some random bytes somewhere in the middle.
Similar to this:
Open existing file, append a single line
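A rough C# sketch of that idea; the path is a placeholder, and you should work on a copy of the .bak file, never the original:
using System.IO;
class CorruptBackup
{
    static void Main()
    {
        const string path = @"C:\temp\copy_of_backup.bak"; // hypothetical path
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
        {
            fs.Seek(fs.Length / 2, SeekOrigin.Begin);       // jump to the middle
            var garbage = new byte[] { 0xDE, 0xAD, 0xBE, 0xEF };
            fs.Write(garbage, 0, garbage.Length);           // overwrite a few bytes
        }
    }
}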
|
|
I want to corrupt a SQL Server database backup in order to test something on SQL Server.
I open the backup file in Notepad and delete some data from it.
Does anyone have a better solution?
Sometimes the backup file is huge and I can't open it in Notepad.
|
how to corrupt sql server backup database
|
If you want to "sync" two MySQL servers without using replication, you can use the Percona Toolkit tool called "pt-table-sync". See here:
http://www.percona.com/doc/percona-toolkit/2.2/pt-table-sync.html
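A sketch of a typical invocation; the host names, database and table are placeholders, and --print only shows the statements it would run (swap it for --execute to apply them):
pt-table-sync --print h=localhost,D=mydb,t=mytable h=remote_host,D=mydb,t=mytable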
|
|
I am trying to sync a local and a remote MySQL DB. I have completed the remote-side work and need an idea of how to export the MySQL DB locally whenever the database changes. Any idea or existing technique?
|
MySQL Auto Export as .SQL Whenever Data Changed
|
Rsync is a good backup tool, but if you want a complete dump of everything that can later be restored to an identical drive, look at good old dd. It can dump the whole drive to a file that you can later use to restore from.
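A minimal sketch; /dev/sdX is a placeholder for the source drive and /mnt/external for the mounted external disk. Verify the device names with lsblk or fdisk -l first, because dd overwrites without asking:
# create the image
dd if=/dev/sdX of=/mnt/external/fedora-drive.img bs=4M conv=sync,noerror
# later, restore the image to an identical drive (double-check the target!)
dd if=/mnt/external/fedora-drive.img of=/dev/sdX bs=4M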
Thanks, any chance I can get the actual commands I need to type in? Including the excludes, so I don't get an infinite loop. I'm new to linux and I cannot make a mistake backing up this drive. Thanks,
– Mich
Sep 5, 2014 at 19:28
|
|
Good afternoon,
I have the task of backing up an entire drive (~20 GB) of a Fedora installation (I don't know the exact release). I would prefer to back this up into an image on an external hard drive, so that if the system fails I will be able to easily restore it onto an identical drive. The drive the system is on is not a hard drive; I believe it is a CF card, but it may actually be a small hard drive.
So, to my understanding, in order to restore it, I would need to use another linux computer to flash the CF card using the image.
I have no previous experience backing up files in Linux, so in order for me to use any of your help, I would like to request that the answers have the exact commands I will need to do this backup and restore.
It is also imperative that the original installation remains intact and does not get damaged by this backup process.
Thank you,
Your help is appreciated,
-D
|
How to back up Fedora Linux and restore or load image in VMWare
|
I looked at the permissions for the SQL database; they seem to be OK. I tried to move it back to the original server. It seems something went wrong in the transfer to the endpoint with the SQL .bak file.
I tried to do it again and got the same result. So something goes wrong when I transfer it to a shared drive and grab it from there. :(
So the answer is that the backup's header is broken. :(
|
I recently changed my virtual machine from VirtualBox to Hyper-V because of the better performance on Hyper-V. After I did this, I cannot restore a database (2008 R2 in all environments, same version) from outside a test or production environment, and I could before. I get this error in my SQL log:
backupiorequest::reportioerror read failure on backup device. Operating system error 13 (failed to retrieve text for this error. Reason 15105)
I of course tried Google, which tells me that operating system error 13 is some kind of "permission failure". I tried giving the backup file Full Control for Everyone, but it seems to make no difference.
I think it is a permission problem; I'm just stuck and don't know how to solve it. Any suggestions?
I changed my virtual machine's name from one thing to another. Could the problem be that my "rights" are now assigned to oldName and not newName? If so, where do I need to change them?
|
SQL backup "Operating System error 13", system.Io.error
|
I believe you would just add another line to the crontab with the script you want to run and the specific date and time. Also, here is a link for cron jobs in Ubuntu; I'm not sure what flavor you're running, but I know it works in Debian 4.6 (Squeeze).
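Cron cannot express "first Saturday of the month" directly, so one common sketch is to run both entries every Saturday and let a day-of-month test decide which script executes. The monthly script path is an assumption that mirrors the weekly one from the question, and note that % must be escaped as \% inside a crontab entry:
0 4 * * 6 [ "$(date +\%d)" -le 7 ] && /var/lib/backup_monthly.sh >> /var/log/backup_monthly.log 2>&1
0 4 * * 6 [ "$(date +\%d)" -gt 7 ] && /var/lib/backup_weekly.sh >> /var/log/backup_weekly.log 2>&1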
|
|
I have a cronjob that runs every saturday at 4am like:
0 4 * * 6 /var/lib/backup_weekly.sh >> /var/log/backup_weekly.log 2>&1
Is there a way to run a different script (backup_monthly.sh) at 4 am on the first Saturday of every month, without running the script above (backup_weekly.sh)?
|
Cron entry to run first saturday of every month
|
Why reinvent the wheel? You can just use Debian's automysqlbackup package (should be available on Ubuntu as well).
As for cleaning old files the following command might be of help:
find /mysql -type f -mtime +16 -delete
Uploading to a remote server can be done using the scp(1) command.
To avoid the password prompt, read about SSH public key authentication.
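For example, something along these lines (host, user and paths are placeholders):
scp /backup/mysql/*.gz backupuser@remote.example.com:/srv/backups/mysql/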
|
|
After months of trying to get this to happen I found a shell script that will get the job done.
Here's the code I'm working with:
#!/bin/bash
### MySQL Server Login Info ###
MUSER="root"
MPASS="MYSQL-ROOT-PASSWORD"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
GZIP="$(which gzip)"
### FTP SERVER Login info ###
FTPU="FTP-SERVER-USER-NAME"
FTPP="FTP-SERVER-PASSWORD"
FTPS="FTP-SERVER-IP-ADDRESS"
NOW=$(date +"%d-%m-%Y")
### See comments below ###
### [ ! -d $BAK ] && mkdir -p $BAK || /bin/rm -f $BAK/* ###
[ ! -d "$BAK" ] && mkdir -p "$BAK"
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BAK/$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
lftp -u $FTPU,$FTPP -e "mkdir /mysql/$NOW;cd /mysql/$NOW; mput /backup/mysql/*; quit" $FTPS
Everything is running great; however, there are a few things I'd like to fix, but I am clueless when it comes to shell scripts. I'm not asking anyone to write it, just for some pointers. First of all, the /backup/mysql directory on my server stacks up the files every time it backs up. Not too big a deal, but after a year of nightly backups it might get a little full, so I'd like it to clear that directory after uploading. Also, I don't want to overload my hosting service with files, so I'd like it to clear the remote server's directory before uploading. Lastly, I would like it to upload to a subdirectory on the remote server, such as /mysql.
|
Mysql Auto Backup on ubuntu server
|
Someone from the osCommerce forum just helped me out.
This solved the problem:
In admin/includes/configure.php, add this define:
define('DIR_FS_ADMIN', '/home/"servernamehere"/public_html/admin/');
|
|
I have a problem with the osCommerce database backup manager directory.
I tried permissions 777, 755, and 775, but I still get the same error: "Error: Backup directory does not exist, please create it and/or set its location in configure.php."
The folder is there, and it is also set in the config file:
define('DIR_FS_BACKUP', DIR_FS_ADMIN . 'db_backup/');
Any ideas why this happens every time?
|
OSCommerce rror: Backup directory does not exist. , please create it and/or set location in configure.php
|
This will use xcopy /l to retrieve the list of files in A that differ from B. Then each file is copied to the appropriate folder in C.
The initial pushd allows the xcopy command to return relative paths in the form
.\file.ext
.\folder\file.ext
The initial dot is removed by the delims=. clause, and the second for concatenates the C folder path and the retrieved file path to determine the final target.
@echo off
setlocal enableextensions disabledelayedexpansion
set "folderA=%cd%"
set "folderB=c:\temp"
set "folderC=c:\temp2"
pushd "%folderA%"
for /f "tokens=* delims=." %%a in ('xcopy . "%folderB%" /dehyil ^| find ".\"') do for %%b in ("%folderC%%%a") do (
echo %%b
if not exist "%%~dpb" mkdir "%%~dpb" >nul
copy /y ".%%a" "%%~dpb" >nul
)
popd
|
I have two directories (with sub-directories), A and B. B is an old copy of A, so it's likely that there are:
1) Files in A, but not in B
2) Files in B, but not in A
3) Files in A with a newer 'last modified' Timestamp
I want to identify all of these by copying them into another Directory, C
I've tried looping over all files in A and calling another batch to compare timestamps, but I wasn't able to find the corresponding file in B.
Thank you for your help.
Edit: In other words: "B" is my Backup of "A". Now I want to create a differential Backup in "C"
|
Batch - Copying Files depending on their timestamp
|
setlocal enabledelayedexpansion
SET BACKUP_DIR=C:\backup_dir\
SET TIMESTAMP=%DATE:~-4,4%_%DATE:~-7,2%_%DATE:~-10,2%_%TIME:~-11,2%.%TIME:~-8,2%
FOR /F "tokens=1-3* delims=," %%A IN (list.csv) DO (
SET DEST_DIR="%BACKUP_DIR%%%C"
echo f | xcopy /f /y %%B%%C%%A !dest_dir!%%~nA_!TIMESTAMP!%%~xA
)
PAUSE
|
I am new to writing Windows batch scripts. I am trying to create a batch file that reads a filename and source path from a CSV file and copies the file to a destination path. The destination path is partially fixed and part of it comes from the source path. I am able to do it if I also specify the destination path in the CSV, but I want to derive it from the source path value.
A row in input file looks like this:
filename,C:\parent_dir\,path
This is what i have tried so far.
SET backup_dir=C:\backup_dir\
FOR /F "tokens=1-3* delims=," %%A IN (list.csv) DO (
SET dest_dir="%backup_dir%%%C"
xcopy %%B%%C%%A %dest_dir% /e
PAUSE
)
PAUSE
This reads the file from parent dir properly but does not copy it to dest dir. Instead the files are being copied to the location where batch file is. I also have to add a timestamp to the file which is being backed up. I can get the timestamp value using this:
SET TIMESTAMP=%DATE:~-4,4%_%DATE:~-7,2%_%DATE:~-10,2%_%TIME:~-11,2%.%TIME:~-8,2%
but adding it to the filename before the extension is not working. { I wish to change the filename from file.extn to file_timestamp.extn. } The extension of files can be different and therefore i cannot hardcode it.
Appreciate your help here.
|
Windows Batch File To Backup Files
|
This uses:
robocopy inside a subroutine to get the current date (an error is forced to get the time stamp of the error in yyyy/mm/dd format)
mountvol to enumerate the defined drives
vol to test drive accessibility
It will search for a "flag" folder (the backup folder) in the drives to determine which to use
Once all the information is retrieved, the appropriate robocopy command is used.
@echo off
setlocal enableextensions disabledelayedexpansion
call :getDriveLetter "\Backup\bk*" drive
if errorlevel 1 (
echo Drive not found
goto :eof
)
call :getTodayDate today
if errorlevel 1 (
echo Date retrieval error
goto :eof
)
set "BKDate=%today:/=_%"
set "source=c:\users\%username%\Dropbox"
set "target=%drive%:\Backup\BK_%BKDate%"
robocopy "%source%" "%target%" /e
dir /s "%target%" > "%drive%:\Backup\LOG_%BKDate%.txt"
endlocal
exit /b
:getTodayDate returnVar
set "%~1=" & for /f %%a in (
'robocopy "|" "." "%~nx0" /njh /r:0 /nocopy /l'
) do set "%~1=%%a" & exit /b 0
exit /b 1
:getDriveLetter folderToTest returnVar
set "%~2=" & for /f "tokens=1 delims=: " %%a in (
'mountvol ^| find ":\"'
) do vol %%a: >nul 2>&1 && (if exist "%%a:%~1" set "%~2=%%a" & exit /b 0)
exit /b 1
|
I'm trying to make this batch script automatic. The drive letter changes and I can't find a way in Windows to force it to use the same one each time. I'm not using the %date% environment variable because I need the date format to be like this: "YYYY_MM_DD".
Is there any way I can get this script to run without user input?
@echo off
set /p "Drive_Letter=Enter Drive letter.>"
set /p "BKDate=Enter date.>"
cd\
%Drive_Letter%:
cd %Drive_Letter%:\Backup\
md BK_%BKDate%
cd\
Robocopy /E c:\users\%username%\Dropbox\ %Drive_Letter%:\Backup\BK_%BKDate% *
cd %Drive_Letter%:\Backup
dir /s %Drive_Letter%:\Backup\BK_%BKDate%% >> LOG_%BKDate%.txt
|
how can i get this script to run without user input
|
Well, there's this ( http://www.sqlskills.com/blogs/paul/new-script-how-much-of-the-database-has-changed-since-the-last-full-backup/ ). I'm just trying to figure out what problem you're trying to solve. That is, if you find that the size is below some threshold, it will be (by definition) cheap to do.
|
I am working with a data warehouse with SQL Server 2012 and was wondering what would be the most optimized, automated procedure for a backup/restore strategy.
Current observations and limitations:
1) Cannot use transaction logs as it would affect my load performance - datasets are potentially huge with large transactions
2) Current plan is to do full backup every week and differential backup every day
I am not sure when DML operations will happen as it depends on my application's usage, but is there a way to just track the NUMBER of changes to a database that would trigger a differential backup? A way that would not affect performance? I do not want to be taking unnecessary differential backups.
Would Change tracking be a good solution for my scenario? Or would there be overhead involved? I do not need to know the actual data that was changed, just the fact that it was changed by a certain amount.
Thanks in advance!
|
Optimized Way of Scheduling a Differential Backup
|
0
Yes, when the leader server is down one of the follower servers becomes leader automatically. In ZooKeeper, an ensemble of 2N+1 servers can tolerate the failure of N servers. For creating an ensemble check:
what is zookeeper port and its usage?
The delay depends on the parameters you set in the conf/zoo.cfg file. The main parameter is the "tickTime", which acts as a kind of heartbeat interval between the servers. For internal synchronization it uses the Zab protocol. You can check:
Zookeeper Working
Distributed Application using Zookeeper
Share
Follow
edited May 23, 2017 at 12:08
CommunityBot
111 silver badge
answered Aug 17, 2014 at 6:41
Sandeep DasSandeep Das
1,01099 silver badges2222 bronze badges
Add a comment
|
|
I am new to Zookeeper and have some doubts:
when a leader server is down, what's the backup strategy? Does some random follower server become the leader automatically?
how much delay is there in propagating a change from the leader server to the follower servers? This is because the write operation only happens on the leader server and it propagates to the follower servers with the next state. I am just wondering what the strategy for syncing between the leader server and the follower servers is, and how often this happens.
|
zookeeper synchronization and backup strageties
|
0
The Duplicator Plugin gives WordPress administrators the ability to migrate, copy or clone a site from one location to another. The plugin also serves as a simple backup utility. The Duplicator supports both serialized and base64 serialized string replacement. If you need to move WordPress or backup WordPress this plugin can help simplify the process.
use this plugin
https://wordpress.org/plugins/duplicator/
Share
Follow
answered Aug 13, 2014 at 4:55
Boopathi RajanBoopathi Rajan
1,2101515 silver badges3838 bronze badges
Add a comment
|
|
I need help understanding the ins and outs of backing up WordPress. What is the best method to backup WordPress.org files and database? Manually through phpmyadmin -or- a backup plugin?
.
Can backups done MANUALLY result in corrupted backups? Can backups from PLUGINS result in corrupted backups?
Does backup plugins make your website run slower?
Is UpdraftPlus any good? Any drawbacks?
.
I understand that it is necessary to backup the WordPress DATABASE, but why is it necessary to back up WordPress FILES? Is it for people who make changes to the PHP files within the WordPress dashboard?
Do I need to back up wordpress FILES if I make changes to my theme in a local server then upload to host?
|
How to Back Up wordpress.org?
|
Further investigation and "Googling" solved the issue. The issue was with the drive: it was compressed, and SQL Server did not like that.
Solution
Removed the compression from the drive, and this time the backup completed without any errors and much faster.
|
I am trying to take a Full Backup of a database size about 75GB.
It is SQL Server 2000 - 8.00.2055 x86 version. I have plenty of disk space where the backup is being created. So disk space isn't an issue.
The backup process starts fine, but halfway through it errors out, saying only status = 33.
I have looked into the error log and it shows the command Enterprise Manager executed to take the backup with exactly the same error message, status 33, which doesn't really help :S
I have been looking for information online but couldn't find anything. Any suggestions, solutions, or pointers in the right direction are much appreciated.
|
SQL Server 2000 Backup fails with Status = 33
|
1
Have you seen the Backup Manager? http://developer.android.com/guide/topics/data/backup.html
Here is an article on how to use it with Gson:
https://advancedweb.hu/2015/01/06/efficient-sqlite-backup-on-android/
Not sure if this is the best solution, but it will allow you to pick and choose the columns you want to restore.
Share
Follow
answered Apr 16, 2015 at 20:59
SammyTSammyT
75911 gold badge1010 silver badges1919 bronze badges
Add a comment
|
|
Is there any official Android way of doing a SQL db backup to SD or phone storage? There are several backup helper classes mentioned in the documentation but all of them are pretty useless.
I know that the SQLite database is a simple file, so I could use the Java IO classes to read it and write it somewhere. But that is not a wise idea if a later version of the app contains several newly added columns; we could not restore it back.
Is there any way to overcome this issue? Any easy way that the so-called Android engineers provided in the documentation?
|
Android SQL Backup
|
0
Personally I think you should explore using AWS's S3. The better (S)FTP clients can all handle S3 (Cyberduck, Transmit, etc.), the API is friendly if you want to write a script, there is a great CLI suite that you could use in a cron job, and there are quite a few custom solutions to assist with the workflow you describe. s3tools being one of the better known ones. The web UI is fairly decent as well.
Automating the entire lifecycle like you described would be a fairly simple process. Here's one process for windows, another general tutorial, another windows, and a quick review of some other S3 tools.
I personally use a similar workflow with S3/Glacier that's fully automated, versions backups, and migrates them to Glacier after a certain timeframe for long-term archival.
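As a rough illustration of that kind of automation (a sketch only, assuming the AWS CLI is installed and credentials are configured; bucket name and paths are placeholders):
# nightly cron job: sync a backup directory to S3; --delete mirrors local deletions
0 2 * * * aws s3 sync /var/backups/ s3://my-backup-bucket/webserver1/ --delete >> /var/log/s3-backup.log 2>&1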
Share
Follow
answered Sep 1, 2014 at 14:14
brennebeckbrennebeck
45044 silver badges1111 bronze badges
Add a comment
|
|
I've inherited a couple of web servers - one linux, one windows - with a few sites on them - nothing too essential and I'd like to test out setting up back-ups for the servers to both a local machine and a cloud server, and then also use the cloud server to access business documents and the local machine as a back-up for these business documents.
I'd like to be able to access all data wherever I am via an internet connection. I can imagine it running as follows,
My PC <--> Cloud server - access by desktop VPN or Web UI
My PC <--> Web Servers - via RDP, FTP, Web UI (control panels) or SSH
My PC <--> Local Back-up - via RDP, FTP, SSH or if I'm in the office, Local Network
Web servers --> Local Back-up - nightly via FTP or SSH
Cloud Server --> Local Back-up - nightly via FTP or SSH
Does that make sense? If so, what would everyone recommend for a cloud server and also how best to set up the back-up server?
I have a couple of spare PC's that could serve as local back-up machines - would that work? I'm thinking they'd have to be online 24/7.
Any help or advice given or pointed to would be really appreciated. Trying to understand this stuff to improve my skill set.
Thanks for reading!
|
Where to begin with managing web servers / business document file management
|
It appears to be hidden on the developer portal, however there is a data export API, which is available to paid networks. You will need to use API credentials from a verified admin account to execute the API. Normal user accounts are unable to execute the data export endpoint.
|
How can I export all threads - including attachment - from a yammer-network ?
Background
we have used the free version of yammer for a while - and it has now been decided to use a paid version. Because of that I need to backup all post/images/etc on our existing network.
But so far I have been unable to find a suitable tool to do this - and the export utility is not available for a free instance (which will be closed down eventually) ?
Please advise - thanks in advance.
|
Export content from a yammer network
|
0
As you said: "... take the backup of controlfile ... configure the controlfile autobackup on ... after taking the backup ... delete controlfile thinking I'll recover it."
At the time you back up the control file, the current setting of CONTROLFILE AUTOBACKUP is still OFF; you change it to ON, drop the control file, then restore it, and the restored setting is still OFF.
If you want to keep it ON, set it before taking the backup.
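A minimal sketch of the intended order in RMAN (the setting then persists for the target database):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> SHOW CONTROLFILE AUTOBACKUP;
RMAN> BACKUP CURRENT CONTROLFILE;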
Share
Follow
answered Mar 22, 2019 at 8:18
DuongDuong
47555 silver badges1313 bronze badges
Add a comment
|
|
I was backing up the control file through RMAN.
I put the database in mount mode and took the backup of the control file.
I configured CONTROLFILE AUTOBACKUP ON (by default it was OFF). After taking the backup, I opened the SQL prompt, put the database in nomount mode, and deleted the control file, thinking I would recover it. Going back to the RMAN prompt and firing SHOW ALL; I get:
CONTROLFILE AUTOBACKUP OFF; # DEFAULT
My question is: why is CONTROLFILE AUTOBACKUP going back to OFF when I turned it ON?
And how do I make the change permanent?
|
Oracle Rman configuration parameter
|
0
You may want to look into using rsyslog on the Linux servers to send logs elsewhere. I don't believe you can configure it to delete logged lines after a verification step - and I'm not sure you'd want to either. Instead, you might be best off with an aggressive logrotate schedule plus rsyslog.
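As a sketch of that combination (file names and the central host are placeholders):
# /etc/rsyslog.d/90-forward.conf - forward everything to the central server over TCP (@@ = TCP, @ = UDP)
*.* @@central-log.example.com:514
# /etc/logrotate.d/bigapp - keep only a few compressed rotations locally
/var/log/bigapp/*.log {
    daily
    rotate 3
    compress
    missingok
    notifempty
}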
Share
Follow
answered Aug 2, 2014 at 8:09
carl.andersoncarl.anderson
1,0601111 silver badges1616 bronze badges
Add a comment
|
|
I have multiple Linux servers with limited storage space that create very big daily logs. I need to keep these logs but can't afford to keep them on my server for very long before it fills up. The plan is to move them to a central windows server that is mirrored.
I'm looking for suggestions on the best way to this. What I've considered so far are rsync and writing a script in python or something similar.
The ideal method of backup that I want is for the files to be copied from the Linux servers to the Windows server, then verified for size/integrity, and subsequently deleted from the Linux servers. Can rsync do that? If not, can anyone suggest a superior method?
|
Moving files from multiple Linux servers to a central windows storage server
|
0
You need to walk the tree. Create a recursive function that iterates through the sub-directories of each directory. For example,
here is the crux of it:
Private Sub RecurseDirectories(ByVal di As DirectoryInfo)
Try
For Each d In di.GetDirectories()
'Do stuff with the d (directory) here.
RecurseDirectories(d) 'get sub directories of this directory
Next
Catch
End Try
End Sub
Share
Follow
edited May 23, 2017 at 10:33
CommunityBot
111 silver badge
answered Jul 28, 2014 at 15:45
DonalDonal
31.8k1010 gold badges6464 silver badges7373 bronze badges
Add a comment
|
|
First time poster here so please be gentle.
I’m using Visual Studio Express 2010
I’m trying to create a backup program to backup specific folders and files each night to an external hard drive.
I’m using:
Dim dirs As List(Of String) = New List(Of String)(Directory.EnumerateDirectories(dirPath))
To get a list of directories on my local drive and then:
For Each folder In dirs
To cycle through all the directories and then:
If Not Directory.Exists(FdriveDirName) Then
My.Computer.FileSystem.CopyDirectory(CdriveDirName, FdriveDirName, True)
End if
This works fine for the first copy. My problem is when I create a new folder within a folder in the root directory (for example installing a new program would create a new folder in program files in c:) This new folder is NOT copied the 2nd time I run the program.
I can see the reason behind this is that the EnumerateDirectories function only lists folders one level down in the folder hierarchy.
I clearly need to list all folders AND subfolders but even replacing EnumerateDirectories with getdirectories hasn’t helped me.
Any one with any ideas?
|
VB.net Backup Program that lists subfolders
|
OK, you should use SQL Server Agent:
In Object Explorer, connect to SQL Server, expand the 'SQL Server Agent' node, expand Jobs; right click and select the 'New Job' menu.
Type in a name for the SQL Agent job.
Create a backup job step:
Click on 'New' to create a new job step.
Type in a name for the job step and the T-SQL statement to back up the database.
In the last step you can click OK to save, or click on "Script" to generate a script and use it in your program.
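For instance, a minimal sketch of the T-SQL the job step could run (database name and path are placeholders):
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase.bak'
WITH INIT, STATS = 10;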
|
What is the T-SQL script for a backup maintenance plan? I want to manage that from an app.
Actually, I want to configure automatic backups with Task Scheduler using a T-SQL script from the app.
Thanks.
|
T-Sql Script for SQL Server Backup Maintenance Plan?
|
0
One way of doing it through cron is to simply back up the data folder of the collection as well as the instance folder with all the configs. If you take the data folder and stick it into another collection with the same config it will work fine.
However, I am not sure what the impact is if Solr is running at the time of backup; you may want to experiment with that.
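For a cron-driven copy, a sketch along those lines (paths are illustrative and depend on where your core's data directory lives):
# 03:00 daily: archive the collection's data directory
0 3 * * * tar czf /backups/solr-collection1-$(date +\%F).tar.gz /var/solr/collection1/data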
Share
Follow
answered Jul 27, 2014 at 20:16
nick_v1nick_v1
1,66411 gold badge1818 silver badges2929 bronze badges
Add a comment
|
|
I've installed and configured the latest Solr release - Solr 4.9.
It contains more than 10,000,000 articles and it works perfectly at the moment.
But I'm worried about data loss and I want to back up my Solr indexes so that I can recover quickly in case of catastrophic failure.
I've wasted a lot of time trying to find a solution or good documentation, but without result.
I've added following strings in my solrconfig.xml:
<requestHandler name="/replication" class="solr.ReplicationHandler" >
<lst name="master">
<str name="replicateAfter">optimize</str>
<str name="backupAfter">optimize</str>
<str name="confFiles">schema.xml</str>
<str name="commitReserveDuration">00:00:10</str>
</lst>
<int name="maxNumberOfBackups">2</int>
</requestHandler>
and opened following url in browser:
http://mydomen.com:8983/solr/#/collection1/replication?command=backup&location=/home/anton
but the backup wasn't created.
What's wrong with my configuration? Can I make backups via cron?
Regards, Anton.
|
How to backup indexes in Solr 4.9?
|
Be aware of symbolic links (aka symlinks)! They get rerouted to the live system - even if you use them in a backup path!
In my case I wanted to restore a file from a web project. Since my server administration tool ISPConfig generates a hierarchical structure and shorter symlinks, I am used to taking advantage of those symlinks without even thinking about it any more.
The actual server path is:
/var/www/clients/clients1/web1/web/
but normally I use the more convenient symlink:
/var/www/domain.ext/web/
If you use the symlink in your mounted backup you will end up in the live system. Since you are accessing it from the system where those links refer to the live system:
/mnt/backup/daily0/localhost/var/www/domain.ext/web/
leads you to:
/var/www/domain.ext/web/
but you are actually seeing the path to the backup you typed in above.
If you want to reach the actual backup content you need to use the real path without symlinks:
/mnt/backup/daily0/localhost/var/www/clients/client1/web1/web/
You can check for symlinks by using the ls command with a more elaborate output like "ls -l".
|
I have set up rsnapshot with a cron job and stored 7 daily backups of my online server on a backup storage. Everything seemed to have gone fine since I could access everything in my backups, which I mounted via NFS.
One day I actually needed to restore a lost important file from my backup. Unfortunately each daily backup just showed me the content of the live system. Every change I made on the live server was instantly reflected in every backup as well.
|
Why points rsnapshot backup to live-system?
|
Unfortunately I could not get around the setuid error; however, I found a different implementation that allows me to achieve the same results with my systems.
Here is the guide that I followed.
The crux of this solution involves using DiskShadow in conjunction with rsyncd, and it requires DiskShadow scripts to run as part of the backup process.
|
I am trying to implement rsyncd (through BackupPc) on a Windows 2002R2 server which already has cygwin on it (for accessing mail logs). I normally use a lighter installation with just the cygwin1.dll and rsyncd.exe plus the config files (rsyncd.conf, rsyncd.lock, rsyncd.log & rsyncd.secret) and install as a service so that it can be triggered by my remote BackupPc server but that approach doesn't work here as the server already has a cygwin installation.
I installed the rsycd package through the cygwin installation, set it up as a service (following this guide) and configured it to work with my BackupPc server.
Pings from the server are okay and I know it passes authentication (as I originally has the path to rsyncd.secrets wrong) but now it presents me with the error:
2014-06-26 13:03:01 full backup started for directory cDrive
2014-06-26 13:03:01 Got fatal error during xfer (setuid failed)
2014-06-26 13:03:06 Backup aborted (setuid failed)
The user is privileged and I have not received this error with the light installation method (mentioned above) in the same OS environment.
|
Got fatal error during xfer (setuid failed) - Backuppc cygwin rsyncd
|
svndumpfilter has nothing to do with revision ranges;
svnadmin dump can be used with the -r LOWER:UPPER option;
svnlook youngest gives you the latest revision in the repo.
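Putting those together, a sketch for dumping only the last 10 revisions (repository path is a placeholder):
REPO=/path/to/myrepo
HEAD_REV=$(svnlook youngest "$REPO")
START_REV=$(( HEAD_REV > 9 ? HEAD_REV - 9 : 0 ))
svnadmin dump -r "$START_REV:$HEAD_REV" --incremental "$REPO" | gzip -9 > "myrepo-last10-$(date +%F).dump.gz"
The --incremental flag keeps the partial dump small; without it the first dumped revision contains the full tree at that point.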
|
I have a svn repository that I backup with
svnadmin dump myrepo | gzip -9 > myrepo-$(date +"%Y-%m-%d-%T").dump.gz
hourly (I'm a little crazy).
Is there a way to dump only the last 10 or 20 revisions of the repository?
Thanks
|
svnadumpfilter only the lasts 10 revision
|
0
You can change the database recovery model to bulk logged with the following command:
ALTER DATABASE [database name] SET RECOVERY BULK_LOGGED;
Share
Improve this answer
Follow
answered Jun 23, 2014 at 12:25
Dan GuzmanDan Guzman
44.7k33 gold badges4848 silver badges7474 bronze badges
Add a comment
|
|
What is the query to bulk log backup a database
I have the following for SIMPLE AND FULL, is it something similar to this?
USE [master]
GO
ALTER DATABASE [database name] SET RECOVERY FULL
GO
exec [dbo].[up_DBA_Create_Jobs_op] [database name]
GO
|
Bulk Log Database query T-SQL
|
This downloads over HTTP but also works for FTP. It can upload at least over HTTP (never tried FTP); see the docs. To upload, change the verb and pass the upload data to File.Send.
On Error Resume Next
Set File = WScript.CreateObject("Microsoft.XMLHTTP")
File.Open "GET", "http://www.pepperresources.org/LinkClick.aspx?fileticket=B1doLYYSaeY=&tabid=61", False
'This is IE 8 headers
File.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30618; .NET4.0C; .NET4.0E; BCD2000; BCD2000)"
File.Send
If err.number <> 0 then
line =""
Line = Line & vbcrlf & ""
Line = Line & vbcrlf & "Error getting file"
Line = Line & vbcrlf & "=================="
Line = Line & vbcrlf & ""
Line = Line & vbcrlf & "Error " & err.number & "(0x" & hex(err.number) & ") " & err.description
Line = Line & vbcrlf & "Source " & err.source
Line = Line & vbcrlf & ""
Line = Line & vbcrlf & "HTTP Error " & File.Status & " " & File.StatusText
Line = Line & vbcrlf & File.getAllResponseHeaders
wscript.echo Line
Err.clear
wscript.quit
End If
On Error Goto 0
Set BS = CreateObject("ADODB.Stream")
BS.type = 1
BS.open
BS.Write File.ResponseBody
BS.SaveToFile "c:\users\test.txt", 2
|
Hi friends, can anyone help me with a script to upload and download data from FTP, and if it's possible, to create a log report?
Thank you, friends.
I would be very thankful for any help.
|
want script to upload download files from FTP
|
0
Iperius Backup seems to do the trick.
Aaaand now I look like a sales agent.
Share
Improve this answer
Follow
answered Jun 10, 2014 at 13:05
Emily AdlerEmily Adler
7322 silver badges1212 bronze badges
Add a comment
|
|
I am looking for a program that will make scheduled backups to a specific ftp-server/folder. I have tried:
Create Synchronizity
Great for local backup and has scheduling, but lacks ftp. Can not recognize a mapped ftp-drive letter.
AceBackup
Has both scheduling and ftp backup, but the scheduling is not built-in but tries to utilize the scheduler in windows. And fails.
EaseUS Todo Backup
No ftp.
Any suggestions?
|
Software: files backup to ftp-server and built in scheduling?
|
0
There are so many ways you can approach this problem. Here is one way:
You can schedule a job on each computer that runs a script which checks the status code of the backup job and, if it detects failure, sends an email.
Now, how do you get the task results? You might use something like this (not tested):
$s = New-Object -ComObject Schedule.Service
$s.Connect()
# look up the scheduled backup task and check its last result (0 = success)
$au = $s.GetFolder('\').GetTasks(0) | Where-Object { $_.Name -match 'automaticbackup' }
if ($au.LastTaskResult -ne 0) {
    ## send email
}
Depending on the version of PowerShell you can, for example, use the Send-MailMessage cmdlet to send the email.
Hope this helps get you started.
Share
Improve this answer
Follow
answered Jun 10, 2014 at 20:26
Adil HindistanAdil Hindistan
6,48144 gold badges2626 silver badges2828 bronze badges
Add a comment
|
|
I have looked around and not found anything about remotely checking Windows 7 backup status.
We have Windows 2008 R2 SBS running our domain with 5 Windows 7 client computers. Each client computer is backing up to a NAS (some programs we have are a huge pain to re-install if a hard drive dies, so we have a system image of each). I would like to run a PowerShell script that checks each client computer for a successful backup and if one has failed, send an email.
What I need help with the most is the part to query each computer for backup status.
|
Script to get Windows 7 backup status of multiple client computers and send email if one fails
|
0
I suppose you have a local http/php server, so in case you don't need to batch import or export information, I suggest you use a database manager app that can import or export as sql, csv or tsv files.
I use a web-based admin tool called Adminer and it works great (plus, it's just a single php file). It has options to export or import a whole database or just certain tables and even specific registers. Its usage is pretty straightforward.
Share
Improve this answer
Follow
answered Jun 4, 2014 at 6:00
arielnmzarielnmz
8,74599 gold badges3838 silver badges6969 bronze badges
Add a comment
|
|
I'm trying to use Percona XtraBackup to back up a MySQL database. For restoring the database, according to the documentation:
rsync -avrP /data/backup/ /var/lib/mysql/
this will copy the ibdata1 as well.
What if I want to restore the backup into an existing MySQL instance with some existing databases? Would this corrupt my other databases? Clearly this will overwrite the existing ibdata1.
|
percona backup and restore
|
0
Create a scheduled job, and look at the following to create the backup: http://www.sqlexamples.info/SQL/tsql_backup_database.htm.
It makes use of BACKUP DATABASE @db_name TO DISK = @fileName;
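A minimal sketch of what that looks like with a dated file name (database name and folder are placeholders):
DECLARE @fileName NVARCHAR(260);
SET @fileName = N'D:\Backups\MyDb_' + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'.bak';
BACKUP DATABASE [MyDb] TO DISK = @fileName WITH INIT;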
Share
Improve this answer
Follow
answered Jun 3, 2014 at 7:48
3dd3dd
2,5101313 silver badges2121 bronze badges
1
Remember, the user running your program will need permission to write on @fileName.
– user6490459
Mar 27, 2018 at 12:29
Add a comment
|
|
Hey, does anyone know how to take an automatic backup of a SQL Server database once a day (or on a daily or periodic basis)?
If you know a SQL Server configuration for this, please tell me.
Or if you have a solution using a C# / .NET Windows application, please tell me that as well.
|
How to configure SQL Server 2008 to take automatic backup daily?
|
0
Redirecting output from the tar command to DEFAULTDIRECTORY isn't doing what the comment specifies.
I think what you want to do is save the file in the DEFAULTDIRECTORY.
Change the line
tar -vczf ${FNAME}-${TIMESTAMP}.tar.gz ${Chosendata} > ${DEFAULTDIRECTORY}
to
tar -vczf $DEFAULTDIRECTORY/${FNAME}-${TIMESTAMP}.tar.gz ${Chosendata}
Share
Improve this answer
Follow
answered Jun 3, 2014 at 6:11
AlexAlex
2,00311 gold badge1616 silver badges2525 bronze badges
Add a comment
|
|
I'm trying to gzip a file using a script but it will not work and continues to throw errors. Can someone please give me some guidance on what is wrong with this script?
DEFAULTDIRECTORY=”/Backup”
if [ -d "$DEFAULTDIRECTORY" ]; then
mkdir -p /backup
fi # Makes directory if the directory does not exist
# Set the timestamp for the backup
TIMESTAMP=`date +%Y%m%d.%H%M`
# let the user choose what they want to backup
echo -n "Select the file or directory you want to backup"
read Chosendata
# read the backup file name file
echo -n "Select the file name"
read FNAME
# start the backup.
echo -e "Starting backup"
# compress the directory and files, direct the tar.gz file to your destination directory
tar -vczf ${FNAME}-${TIMESTAMP}.tar.gz ${Chosendata} > ${DEFAULTDIRECTORY}
# end the backup.
echo -e "Backup complete"
|
Bash backup wont work
|
0
This PowerShell script works... save it as a .ps1 file:
Function GET-SPLITFILENAME ($FullPathName) {
$PIECES=$FullPathName.split("\")
$NUMBEROFPIECES=$PIECES.Count
$FILENAME=$PIECES[$NumberOfPieces-1]
$DIRECTORYPATH=$FullPathName.Trim($FILENAME)
$baseName = [System.IO.Path]::GetFileNameWithoutExtension($_.fullname)
$FILENAME = [System.IO.Path]::GetFileNameWithoutExtension($_.fullname)
return $FILENAME, $DIRECTORYPATH
}
$Directory = "\\PSFS03\MyDocs$\Abbojo\Insight Software"
Get-ChildItem $Directory -Recurse | where{$_.extension -eq ".txt"} | % {
$details = GET-SPLITFILENAME($_.fullname)
$name = $details[0]
$path = $details[1]
copy $_.fullname $path$name"_backup".txt
}
Share
Improve this answer
Follow
answered Jun 23, 2014 at 3:55
JonJon
122 bronze badges
Add a comment
|
|
Here's what I am trying to do.
I have a few hundred users' My Documents folders, most of which (not all) have a file (key.shk, for the ShortKeys program).
I need to upgrade the software, but doing so makes changes to the original file.
I would like to run a batch file on the server to find the file in each My Docs folder and make a copy of it there called backup.shk.
I can then use this for roll back.
The folder structure looks like this
userA\mydocs
userB\mydocs
userC\mydocs
My tools are xcopy, robocopy or powershell
Thanks in advance
|
Batch file to find a file, create a copy in the same location and run this on multiple directories
|
0
This happens because they are out of sync. It is recommended that you do the backup with the Admin Console, using the Backup tool. This feature will synchronize all TFS databases, including Reports, SharePoint, etc.
Share
Improve this answer
Follow
answered Jun 1, 2014 at 3:16
egomesbrandaoegomesbrandao
78866 silver badges1616 bronze badges
Add a comment
|
|
I am wondering about this problem: tfs_configuration & my tfs_testcollection in the TFS Admin Console and SQL Management Studio.
What is the recommended solution for backing up and restoring tfs_configuration in the TFS Admin Console? I ask because sometimes it happens that the collection and configuration get out of sync. Could you recommend the best solution for preventing these problems? Any MSDN site, books, or blog with solutions?
Thank you in advance.
Lucas.
|
TFS Admin Console
|
Something like this should work:
tar czf backup.tar.gz `ls /www/hosting | grep \.com$ | sed 's/$/\/www/g' | sed 's/^/\/www\/hosting\//g'`
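A loop-free variant that avoids parsing ls output, matching the webXX layout from the question rather than the .com filter (a sketch only):
tar czf /root/backup-$(date +%F).tar.gz /var/www/hosting/web*/www
Unlike cd, tar accepts multiple path arguments, so the shell glob expands to every webXX/www directory instead of only the first one.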
|
I have a VPS with tens of websites on it, and I need to make regular backups, let's say twice a week (Sunday 11pm and Wednesday 11pm).
I have only minimal experience with server management and user experience with Linux (I have played with Linux for 2 years).
The files I need to back up are in the /var/www/hosting/webXX/www/ dirs (XX means web1, web2, ... web50). EDIT: in each webXX dir there are more dirs; I need only this one (www).
I tried to find a bash script for that, but with no result; I have no experience with bash scripting. I would then just call the script with cron.
On Mondays and Thursdays I'd like to download one zipped file to my computer (manually, if it won't be possible automatically).
Thanks.
Roman
EDIT:
OK, I tried solutions without bash.
On the command line via ssh:
ssh root@server '( cd /var/www/hosting/web*/www/ && tar cfz - . )' > backup.tar.gz
It's not automatic (like a bash script would be); I have to start it manually.
Problem: it picks up only the first web, not all of them.
Any idea?
|
Backup all web files from VPS [closed]
|
0
SDN 3.1.0 should not need to create legacy indexes in your database except for fulltext indexes. Otherwise it only creates and uses schema indexes by default, which it also recreates at startup if they are missing.
How do you "search for something"?
Share
Improve this answer
Follow
answered May 27, 2014 at 10:15
Michael HungerMichael Hunger
41.5k33 gold badges5757 silver badges8080 bronze badges
0
Add a comment
|
|
I'm running the community edition of Neo4j (2.0.1) with spring-data-neo4j 3.1.0.RELEASE.
I have no automatic index configured in my Neo4j server; spring-data-neo4j is doing the work for me.
After shutting down the Neo4j service, I made a copy of the data folder and tried to replace the one in my local environment with it.
After starting the local server all the data is there. I can see the list of my indexes at this address:
http://localhost:7474/db/data/index/node/
When I try to search for something in the index the result is "No index hits".
In my backup folder the index folder is there. Is there something else to do in order to back up the whole database including indexes?
update:
this is my annotation for the fulltext search:
@Indexed(indexType = FULLTEXT, indexName = "title-search", unique = false)
private String title;
And here my implementation:
Index<Node> title = template.getIndex("title-search", Application.class);
IndexHits<Node> nodeIndexHits = title.query("title", query);
I'm querying the index directly and not using the repository method, to avoid the label fetching for the object mapping of SDN.
|
After neo4j backup no index hit
|
You should not use " in your set statements. This will put the double-quotes into the actual result. Assuming that your first values were parsed correctly, when you next construct date, the result will be:
""26"."05"."2014""
Then, next time "%date:~-4,4%" will give you "14""".
Remove all the quotes from set statements and try again. If you still have issues, you may need to look into delayed variable expansion. Check out the setlocal and endlocal commands.
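A sketch of the corrected assignments (quotes removed and the variables renamed so the dynamic %date% and %time% are not overwritten; the target path is only illustrative):
set DAY=%date:~-10,2%
set MONTH=%date:~-7,2%
set YEAR=%date:~-4,4%
set HH=%time:~-11,2%
set MN=%time:~-8,2%
set T=%HH%.%MN%
set D=%DAY%.%MONTH%.%YEAR%
echo d | XCOPY Z:\copydirectory "G:\pastdirectory\%D%\%T%" /e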
|
So I've been trying to make an automatic backup and date stamp bat program for a folder and its contents. The first time it loops it does exactly what I want it to. But the second time the loop runs, it changes the folder by removing the first 3 numbers and the 0 in 2014.
It looks like this.
First loop C:\users\username\desktop\05.26.2014\17.11\contents(This is right)
Second loop C:\user\username\desktop\6.2.14\17\contents
Third loop C:\users\username\desktop\2.1\no time folder\contents
There is a time subfolder in the date folder; it is also affected by this until it stops being generated at all.
Can anyone tell what is causing this? Here is what I have in the bat file:
@echo off
set /a x=0
:loop1
timeout /t 600
set day="%date:~-10,2%"
set month="%date:~-7,2%"
set year="%date:~-4,4%"
set hour="%time:~-11,2%"
set minute="%time:~-8,2%"
set time="%hour%.%minute%"
set date="%day%.%month%.%year%"
echo d | XCOPY Z:\copydirectory "G:\pastdirectory" /e
echo Loop number -^>%x%
set /a x=%x%+1
if %x% NEQ 10000 goto loop1
pause
Thanks to anyone who answers.
Edit: changed
variable time to T
and variable date to D
That seems to have fixed it.
|
Automatic date stamp backup bat file changes stamp by itself after second loop
|
0
I strongly recommend you to keep track of user's data on your server.
That way, even if the user deletes the app, the data will still be available by retrieving it from the server. Also, this will let you sync the same data between different platforms, not only devices.
After you retrieve the data, you might want to store it in NSUserDefaults and update it when needed.
Share
Improve this answer
Follow
answered May 22, 2014 at 6:46
SebydddSebyddd
4,30522 gold badges3939 silver badges4343 bronze badges
Add a comment
|
|
I have implemented in-app-purchases like Mission packs or "full version" before. However, I am now looking into selling in-game credits.
What are some ways of keeping track of spent and total credits, even after removing and reinstalling the app
Is it common to sync these totals between the different iOS devices of a single user? Or should a user re-buy credits on different devices?
Should I have a user register with my server and track the credit on there?
|
Backing up in app purchases like gold or credits
|
Finally I found code that answers my question.
SET FECHA=%date:~6,4%%date:~3,2%%date:~0,2%
SET DESTDIR=D:\BackupBBDD\CopiasBBDD\
@rem verify folders and copy last file.
@echo off
setlocal
set srcDir=D:\BackupBBDD\COMPANY1
set lastmod=
pushd "%srcDir%"
for /f "tokens=*" %%a in ('dir /b /od 2^>NUL') do set lastmod=%%a
if "%lastmod%"=="" echo Could not locate files.&goto :eof
xcopy "%lastmod%" "%DESTDIR%"
...
set srcDir=D:\BackupBBDD\COMPANY6
set lastmod=
pushd "%srcDir%"
for /f "tokens=*" %%a in ('dir /b /od 2^>NUL') do set lastmod=%%a
if "%lastmod%"=="" echo Could not locate files.&goto :eof
xcopy "%lastmod%" "%DESTDIR%"
rem RAR and copy
rar a -m5 -df -y Backup_RAR_%FECHA%.rar CopiasBBDD
xcopy D:\BackupBBDD\Backup_RAR_%FECHA%.rar \\tsclient\D
|
I have 6 folders with database backup files named like COMPANY_Backup_DATE.rar. I want to copy the last file from each into one folder, compress it, and copy it to my PC.
SET FECHA=%date:~6,4%%date:~3,2%%date:~0,2%
rem Company1
XCOPY D:\BackupBBDD\COMPANY1\COMPANY1_backup_*.bak D:\BackupBBDD\CopiasBBDD\ /d /s
...
rem Company6
XCOPY D:\BackupBBDD\COMPANY6\COMPANY6_backup_*.bak D:\BackupBBDD\CopiasBBDD\ /d /s
rem rar and delete the folder
rar a -m5 -df -y Backup_RAR_%FECHA%.rar CopiasBBDD
rem copy to my pc
copy D:\BackupBBDD\Backup_RAR_%FECHA%.rar \\tsclient\D
Every time I execute this batch it copies all the files in every folder. Backup files are created weekly, and the folder "CopiasBBDD" is created at the beginning of this script and deleted at the end.
|
CMD Copy last created items
|
0
I think you have two questions here.
Q1 is how you should keep a copy of your files on a remote server. The answer to that is rsync over ssh.
Q2 is how to supply a password to ssh when you can't put your key on the remote server. This is answered here:
how to pass password for rsync ssh command
Hope that helps.
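For the rsync-over-ssh part, a sketch using sshpass (assumes sshpass is installed; host, user, and paths are placeholders, and the password should live somewhere readable only by the backup user):
sshpass -p "$TARGET_PASSWORD" rsync -avz -e ssh /data/to/backup/ user@target.example.com:/backup/dir/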
Share
Improve this answer
Follow
edited May 23, 2017 at 12:27
CommunityBot
111 silver badge
answered Jun 11, 2014 at 23:17
Jason LewisJason Lewis
1471111 bronze badges
Add a comment
|
|
I want to copy files from a source to a target unattended via a bash script. I have the option to use sftp, FTP over SSL, rsync, WebDAV, or CIFS.
I do not have the option to install an SSH key pair on the target side (Strato HiDrive), so scp and sftp won't work, will they?
I have read about an scp -W option to store the password in a file, but can't find detailed information about it…
Any ideas?
|
How to copy files from source to target with a unattended bash script?
|
0
I don't think it is a good idea to archive data into a flat file. Consider using partitioning of your tables. Different partitions can be stored in different tablespaces and thus also on different storage (even tape storage would be possible in theory).
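A sketch of what monthly interval partitioning can look like on 11g (table, column, and partition names are placeholders):
CREATE TABLE orders_hist (
  order_id   NUMBER,
  created_at DATE NOT NULL
)
PARTITION BY RANGE (created_at)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_initial VALUES LESS THAN (DATE '2014-01-01')
);
-- partitions older than 6 months can then be moved to a cheaper tablespace, exported, or dropped individually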
Share
Improve this answer
Follow
answered May 14, 2014 at 13:21
Wernfried DomscheitWernfried Domscheit
56.7k99 gold badges8383 silver badges117117 bronze badges
Add a comment
|
|
We have a requirement to automatically archive data older than 6 months from an Oracle table into a flat file or a db log.
Is there an existing way Oracle addresses this, or do we need to do some manual work for it?
Also, can anyone suggest different ways of addressing this archiving process, like writing a batch program to fetch the records older than 6 months and write them to a flat file or CSV, etc.? It could be an Oracle-backed solution like triggers or scheduled jobs, or a programmatic solution (preferably in Java).
Please help.
|
Archiving Oracle table data after a particular period of time
|
I am not sure there is one answer to this problem. In my case I save the information in the keychain. Other iOS SDKs such as Amazon's or Facebook's do the same thing as far as I can see.
|
I am currently making an app and there is an important piece of information I need to store. The user can make a one time in-app purchase.
My question is, what is the apple recommended or approved method for storing this?
No. 1 is most important to me. For example imagine the user can purchase 10 lives. After his purchase he will use some of them so imagine he now has a balance of 5. Where should this number be stored.
The issues or thoughts or random ideas I have as a result of reading things are;
if it's saved in a simple file then a jailbreaker can just go in and edit the file.
if it's saved in an encrypted file I think I have extra issues with my app/Apple/certain countries because I am using encryption.
what happens when the user accidentally removes the app? He cannot restore his purchases as it's a one-time purchase.
should I back up this important piece of information on a sync, and if so, how?
how do I ensure this information is saved as part of a backup?
|
Where to store a piece of important data (away from users) that will be backed up by itunes
|
0
Try this command in CMD
FC /B pathname1 pathname2
for more parameters check this http://ss64.com/nt/fc.html
Share
Improve this answer
Follow
answered May 9, 2014 at 6:17
KneerudgeKneerudge
551111 bronze badges
5
Isn't there any program with gui that could do that, im not very skillfull with cmd
– user3615166
May 9, 2014 at 6:35
in CMD it wont be tough still you can try "UltraCompare"
– Kneerudge
May 9, 2014 at 6:41
pathname1 and 2 should be full pathnames C:\etc?
– user3615166
May 9, 2014 at 6:46
and this compares only 2 files, not whole folders?
– user3615166
May 9, 2014 at 6:55
use like this: FC /B c:\amp\* c:\amp1\*
– Kneerudge
May 9, 2014 at 6:56
Add a comment
|
|
I need to compare 2 folders to check whether the files are exactly the same and the files in one folder are not corrupt. I tried Total Commander but it works only with one file. I tried Beyond Compare and it didn't give me any results :/ Any idea?
|
How to compare multiple files (by content, not date and hour) if they are exactly the same
|
0
Use phpMyAdmin to reload the database into mySQL.
Share
Improve this answer
Follow
answered May 8, 2014 at 22:01
a codera coder
54444 silver badges2323 bronze badges
Add a comment
|
|
I have a database backup that I'm trying to load so that I can extract some historical averages. I think it was a MySQL database, but with some syntax adjustments I was able to create the one and only table I need in Oracle 11g. However, I'm having problems with the INSERT INTO portion of the backup. Basically, some of these text fields were taken directly from fields on our website, and whenever users entered an apostrophe, it messes up everything that follows. Finding all instances of this would take a very long time...
Is there any way to handle this?
Also, all the text in SQL Developer runs horizontally on 2 or 3 rows. Is there any way to fix that? It makes for a lot of side-scrolling instead of vertical scrolling.
|
help me restore a database
|
0
According to the rdiff-backup man page, you use the destination directory along with the --remove-older-than option. The destination directory is the one that contains the rdiff-backup-data directory.
Besides the directory issue, you also have an incorrect time spec for the --remove-older-than option. Quoting the documentation,
time_spec can be either an absolute time, like "2002-01-04", or a time interval. The time interval is an integer followed by the character s, m, h, D, W, M, or Y, indicating seconds, minutes, hours, days, weeks, months, or years respectively, or a number of these concatenated. For example, 32m means 32 minutes, and 3W2D10h7s means 3 weeks, 2 days, 10 hours, and 7 seconds. In this context, a month means 30 days, a year is 365 days, and a day is always 86400 seconds.
If you are just running this once, you'll probably be removing multiple increments, in which case you'll also need the --force option.
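Based on the layout in the question, a sketch of the command (run against the backup destination, i.e. the directory that contains rdiff-backup-data; the 12W interval is only an example):
rdiff-backup --force --remove-older-than 12W /home/admin/server1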
Share
Improve this answer
Follow
edited Oct 16, 2014 at 7:09
Infinite Recursion
6,5392828 gold badges4040 silver badges5151 bronze badges
answered Oct 15, 2014 at 20:07
Jesse W at Z - Given up on SEJesse W at Z - Given up on SE
1,75011 gold badge1313 silver badges3535 bronze badges
Add a comment
|
|
I need to delete backups to free up space on my server, but I don't know what directory I should specify in the command to delete them: rdiff-backup --remove-older-than 20B host.net::/remote-dir. So far my directory looks like this on the backup server:
\home\admin\server1\
Then I have a folder inside that called rdiff-backup-data. This is in addition to other folders, but is this the one I should direct the command to?
Thank you very much!
|
what is used for the directory in a rdiff backup command
|
0
Detailed instructions for backing up InformationServer 8.5 can be found here: 8.5 Backup
Later versions of Information Server do have a backup and recovery tool. This tool works against multiple versions of the DataStage product (back to 8.5). See the following link: InformationServer Backup/Recovery. Contact IBM support to obtain this tool.
Share
Improve this answer
Follow
answered Jul 21, 2014 at 18:47
FreddieFreddie
99711 gold badge1212 silver badges2424 bronze badges
Add a comment
|
|
I'm using Infosphere Datastage & Quality Stage 8.5.
I need to know how to backup the whole datastage environment including DB2 files, configurations, etc. to prevent crash event on servers.
Please provide with the documentation as well.
|
Datastage Backup Configuration
|
0
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
This gets into Superuser territory, so you might get more advice over on that forum.
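A sketch of the kind of rsync invocation meant here (host and destination path are placeholders):
# incremental: only new/changed photos are transferred; --delete mirrors removals
rsync -az --delete /var/www/photos/ backup@backuphost.example.com:/backups/photos/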
Share
Improve this answer
Follow
answered Apr 30, 2014 at 5:48
Multimedia MikeMultimedia Mike
12.9k55 gold badges4747 silver badges6262 bronze badges
1
Hi, yes I have shell access and I'll look into that thanks. I have heard of rsync but not really given it much attention yet. Thx.
– Andrew Simeou
Apr 30, 2014 at 8:13
Add a comment
|
|
I have a website that I host on a Linux VPS which has been growing over the years. One of its primary functions is to store images/photos, and these image files are typically around 20-40kB each. The way the site is organised at the moment is that all images are stored in a root folder 'photos', and under that root folder are many subfolders determined by a random filename. For example, one image could have the file name abcdef1234.jpg and would be stored in the folder photos/ab/cd/ef/. The advantage of this is that there are no directories with excessive numbers of images in them and accessing files is quick.
However, the entire photos directory is huge and is set to grow. I currently have almost half a million photos in tens of thousands of sub-folders, and whilst the system works fine, it is fairly cumbersome to back up. I need advice on what I could do to make life easier for back-ups.
At the moment, I am backing up the entire photos directory each time, and I do that by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I use takes ages to sift through all the files and find the most recent ones by date. Also, I would like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data recursively, how cumbersome would it be to have to upload it back stage by stage?
Does anyone have any suggestions perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDN’s and Amazon S3 but this would require a great deal of change to my site in order to make these system work – perhaps I’ll use something like this in the future.
|
What solutions are there to backup millions of image files and sub-directories on a webserver efficiently?
|
0
Ok, if you have PSv3 or higher you can remove the $_.PSIsContainer -and and instead add -Directory to the GCI command which should help speed it up by filtering at the provider level instead of afterwards.
This will stop it from recursing everything and will just pull all the folders in the root folder, and check to see if they have a subfolder with the desired name on it. That should speed things up considerably.
$zip = "C:\apps\7-zip\7z.exe"
$days_behind = -1
$folder_data = (Get-Date).AddDays($days_behind).ToString("yyMMdd")
$archive = "X:\SHARE_ARCH\Archive_$folder_data.zip"
$to_zip = gci X:\SHARE_ROOT | ?{ $_.PSIsContainer -and (test-path "$_\$folder_data")} | Select -Expand FullName
$options = "a", "-tzip", "-y", $archive, $to_zip
& $zip $options
I also removed the parenthesis and used a Select -expand command instead. I don't know that it will really change the speed any, but it's cleaner in general.
Share
Improve this answer
Follow
answered Apr 28, 2014 at 15:34
TheMadTechnicianTheMadTechnician
35.4k33 gold badges4444 silver badges5757 bronze badges
Add a comment
|
|
I have a huge problem with the backup of one share which contains a huge number (10,000,000+) of small files. As far as I know the total megabytes of those files are not that big, but the biggest problem is the number of files.
First things first:
- the share is more or less "regular", so there is a root directory which contains, let's say, 30 directories. All of those 1st-level directories contain subfolders with a date in the format yyMMdd.
I've created a PowerShell script to zip those directories based on the date in their names, so right now I'm running the backup only on .zip files, but...
I've observed that the script's run time is increasing every day (since the script still needs to check all of the folders anyway), and the count of folders is increasing every day.
My question is:
Is there any - let's say - marker to use in this way:
- when the script runs and adds a directory to the archive, mark today's folders as "already archived", so those already-archived folders are skipped in the next script run.
That would give me more or less the same script runtime every day, since it would "check & archive" more or less the same amount of directories that are not archived already.
Can anyone offer some advice? Any idea? I'm running out of options right now.
The script is not very sophisticated:
$zip = "C:\apps\7-zip\7z.exe"
$days_behind = -1
$folder_data = (Get-Date).AddDays($days_behind).ToString("yyMMdd")
$archive = "X:\SHARE_ARCH\Archive_$folder_data.zip"
$to_zip = (gci X:\SHARE_ROOT -Recurse | ?{ $_.PSIsContainer } | ?{$_.Name -contains ($folder_data)}).FullName
$options = "a", "-tzip", "-y", $archive, $to_zip;
$zip $options;
I think the most problematic part is this line:
$to_zip = (gci X:\SHARE_ROOT -Recurse | ?{ $_.PSIsContainer } | ?{$_.Name -contains ($folder_data)}).FullName
|
Zip high volume of folders & files
|
0
I'm not sure I understand what/when to ask for savepoint deletion, but...
@echo off
set "last=-1"
for /f "tokens=1 delims=: " %%a in ('findstr /l /c:"status=3" safepoint.txt') do set /a "last=%%a-1"
if %last% geq 0 (
RunWhatEver --command=DELETE_SAVEPOINTS 0-%last%
)
It searches for the last line with status=3 and retrieves the initial number. If that number is greater than or equal to 0, such a line was found and the script is called to remove everything from 0 up to the savepoint that precedes the last full backup.
Share
Improve this answer
Follow
answered Apr 24, 2014 at 12:27
MC NDMC ND
70.2k88 gold badges8787 silver badges127127 bronze badges
8
Unfortunately it seems that the script does not run the RunWhatEver --command=DELETE_SAVEPOINTS 0 - %last% command
– Ahmetfam
Apr 24, 2014 at 14:04
@Ahmetfam, obviously the runwhatever should be replaced with your script. If you have already done it, what is the result? If you place an echo %last% before the if ... line, what does it echo to console?
– MC ND
Apr 24, 2014 at 14:12
please see my code again, maybe you can see why the script dies before the for loop.
– Ahmetfam
Apr 24, 2014 at 15:42
@Ahmetfam, your OracleDatabase is a batch file. If, from a batch file, you directly call another batch file, the execution is transfered to the called batch, and does not return to the caller. You need to use call OracleDatabase.bat .... for the execution to continue on the caller.
– MC ND
Apr 24, 2014 at 15:48
@Ahmetfam, and, Why the second for loop?
– MC ND
Apr 24, 2014 at 15:50
|
Show 3 more comments
|
I start a script --command=STATISTICS --statdata=SAVEPOINTS > C:\Safepoints.txt
that generates an output like this.
Page count for each save point version:
0: version=0, status=3, ts=2014-03-18 16:24:51.764, page count=68861
1: version=1, status=3, ts=2014-03-18 17:49:25.622, page count=68861
2: version=2, status=3, ts=2014-03-19 05:00:10.467, page count=68925
3: version=3, status=2, ts=2014-03-20 14:05:53.267, page count=2744
4: version=4, status=3, ts=2014-03-20 15:08:40.607, page count=68859
5: version=5, status=3, ts=2014-03-21 05:00:10.527, page count=68926
My idea is to read C:\Safepoints.txt and check whether there is more than one savepoint with status=3 (full backup); if so, keep only the latest one and start a new command like this:
--command=DELETE_SAVEPOINTS 0-4
I modified the script as follows, but it gets stuck before the
for loop, right after > %mytempfile%
@echo on
set last=-1
set mytempfile=%TEMP%\%random%.out
%ORACLE%\bin\OracleDatabase.bat --dbtype=ORACLE --database=orca --hostname=test.ora.db --port=5645 --user=sa --password=***** --command=STATISTICS --statdata=SAVEPOINTS > %mytempfile%
for /f "tokens=1 delims=: " %%a in ('type %mytempfile% ^| find "status=3" ') do set /a last=%%a
for /f "tokens=1 delims=: " %%a in ('type %mytempfile% ^| find "status=3" ') do (
if %last% neq %%a (
%Oracle%\bin\OracleDatabase.bat --dbtype=ORACLE --database=orca --hostname=test.ora.db --port=5645 --user=sa --password=***** --command=DELETE_SAVEPOINTS %%a
)
)
del /q %mytempfile%
|
Batch script that delete old Backups and keep just one
|
0
To import your database,
open it with your text editor like NotePad++
locate (should be at the beginning of the file) and erase the line:
"CREATE DATABASE yourdbname"
Save it
Now try to import it again
Also, it seems like you have only one database called Wordpress and you're using it for all your websites, right? You should use a unique name for each database/site you create. You can still do that for your current site: create a new database, import your tables, and edit your wp-config.php file to communicate with the new database.
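If the dump is too large to open comfortably in a text editor, the same cleanup can be scripted - a minimal Python sketch, assuming the dump is called backup.sql (both file names here are placeholders):
# Copy the dump, dropping the CREATE DATABASE lines described above,
# so it can be imported into a database you created yourself.
with open("backup.sql", encoding="utf-8", errors="replace") as src, \
     open("backup_clean.sql", "w", encoding="utf-8") as dst:
    for line in src:
        if line.lstrip().upper().startswith("CREATE DATABASE"):
            continue  # skip statements that would recreate the old database(s)
        dst.write(line)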
Share
Improve this answer
Follow
answered Apr 22, 2014 at 17:47
Kimberley FursonKimberley Furson
48711 gold badge77 silver badges1717 bronze badges
1
I took a closer look at my backup.sql file and there's a "CREATE DATABASE" line for each database I have! (I had 5 uniquely-named databases). I bet when I exported my backup, it was for all my locally-hosted databases and not just the single one I'm working on now.
– Jenny Erb
Apr 22, 2014 at 19:11
Add a comment
|
|
I'm unable to restore my site database and need help figuring out why. The Wordpress site is on my local computer.
Here's what I did:
• Exported the backup file in a zip format
• Made updates to plugins through Wordpress platform
• Site crashed from bad update; tried to delete that plugin and made things worse
• Decided to restore backup; tried importing through phpmyadmin; came back with errors
• Dropped all the tables from that database
• Unzipped my backup file; commented out the "create database" line
• Tried importing again and came up with the following errors:
#1007 - Can't create database 'Wordpress'; database exists
I've built several websites locally in the past so yes, there's already another database called Wordpress. I'm not sure how to delete them besides dropping the tables.
I'm losing hope that I'll ever be able to restore this site.
Any insight?
|
Restoring site using .sql backup import
|
Got it - Apple Configurator
The device backup feature can be used to back up and restore an app along with its data.
More is here
|
The question is not about programming. It is related to backing up an iOS enterprise app (not App Store) and installing it on multiple devices.
Can we back up an enterprise app along with its Documents directory and then port that backed-up app to multiple iOS devices along with the data in the Documents directory?
I can do the first part easily I guess, but when it is coupled with the second part then it seems tricky.
Please share any information regarding this. Would be very helpful.
|
How to backup an iOS enterprise app and install on multiple devices
|
0
What does the file mod timestamp of temp-21331.rdb say? It sounds like a leftover from a crash.
You can delete it.
The documentation is definitely correct. When rewriting, all data is written to a temp file (compressed), and when complete, the dump.rdb file is replaced by this temp file. There should, however, be no leftovers during normal usage. What is important: you always need enough free disk space for this operation to succeed. A safe guideline is 140% of the Redis memory limit (it would be 200% if no compression were applied).
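As a rough illustration of that guideline, here is a minimal Python sketch using the redis-py client that checks free space before triggering a background save; the /var/lib/redis path is an assumption, adjust it to wherever your dump.rdb lives:
import shutil
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)
used = r.info("memory")["used_memory"]            # bytes currently used by Redis
free = shutil.disk_usage("/var/lib/redis").free   # directory holding dump.rdb (assumed path)

if free < 1.4 * used:
    print("warning: less than 140% of the Redis memory is free on disk; "
          "the RDB dump may fail for lack of space")
else:
    r.bgsave()  # enough headroom to trigger a background save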
Hope this helps, TW
Share
Improve this answer
Follow
answered Apr 21, 2014 at 14:46
Tw BertTw Bert
3,72911 gold badge2222 silver badges2929 bronze badges
Add a comment
|
|
The documentation says:
Whenever Redis needs to dump the dataset to disk, this is what happens:
Redis forks. We now have a child and a parent process.
The child starts to write the dataset to a temporary RDB file.
When the child is done writing the new RDB file, it replaces the old one.
Because I want to back up the whole dataset, I typed the shutdown command in redis-cli, expecting Redis to shut down and save all data to dump.rdb. After it shut down completely, I went to the db location and found that dump.rdb is 423.9MB while temp-21331.rdb is 180.5MB. The temp file still exists and is smaller than dump.rdb. Apparently Redis did not replace dump.rdb with the temp file.
I am wondering whether dump.rdb is the whole db file at this point, and whether it is safe to delete the temp file.
|
Some confusion on backup whole data in redis
|
I restored the table from only the .frm and .ibd files.
Get the SQL query to create the tables
If you already know the schema of your tables, you can skip this step.
First, install MySQL Utilities.
Then you can use the mysqlfrm command in a command prompt (cmd).
Second, get the SQL queries from .frm files using mysqlfrm command:
mysqlfrm --diagnostic <path>/example_table.frm
Then you get the SQL query to create a table with the same structure.
Like this:
CREATE TABLE `example_table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(150) NOT NULL,
`photo_url` varchar(150) NOT NULL,
`password` varchar(600) NOT NULL,
`active` smallint(6) NOT NULL,
`plan` int(11) NOT NULL,
PRIMARY KEY `PRIMARY` (`id`)
) ENGINE=InnoDB;
Create the tables
Create the table(s) using the above SQL query.
If the old data still exists, you may have to drop the respective database and tables first. Make sure you have a backup of the data files.
Restore the data
Run this query to remove new table data:
ALTER TABLE example_table DISCARD TABLESPACE;
This removes the link between the new .frm file and the (new, empty) .ibd file. Also remove that new .ibd file from the database folder.
Then put the old .ibd file into the new database folder, e.g.:
cp /path/to/backup/example_table.ibd <mysql-datadir>/<database>/example_table.ibd
Make sure the .ibd files can be read by the mysql user, e.g. by running chown -R mysql:mysql . inside that folder.
Run this query to import the old data:
ALTER TABLE example_table IMPORT TABLESPACE;
This imports the data from the old .ibd file and restores the data.
|
I am trying to restore a database in PMA but only have access to frm and ibd files - not the ib_log files which I understand you need.
I know I may not be able to recover the database data but is it possible to recover the structure of the tables from the frm files?
|
MySQL data migration using .frm .ibd files [duplicate]
|
You have to run this from the Windows command line: execute cmd from the Start menu, navigate (cd c:\your\path\to\neo) to the Neo4j installation directory, and run the command from there.
Also, I think that on Windows it's called Neo4jBackup.bat (but I'm not sure, having no Windows).
|
I'm trying to run an Neo4j-database online-backup. I am using a Windows 7-machine and Neo4j enterprise 2.0.1.
I'm pretty new to all that database stuff, so I need pretty precise advice.
So far I have tried various steps to run the backup:
I created a clear directory for the backup (C:\Users\Tobi\Desktop\neo_backup)
I typed the following statement into the Neo4j command box: ./neo4j-backup -from single://localhost:7474 -to C:\Users\Tobi\Desktop\neo_backup.
But, despite the red help box dropping down, nothing happens. I also tried some slightly different statements (i.e. using the IP-address etc.)
What am I doing wrong? Could someone give me some advice?
|
Neo4j enterprise backup Windows 7
|
0
This solves the space issue, and should copy the files including folders.
@echo off
cd\
xcopy "C:\test back\*.*" "D:\new" /s/h/e/k/f/c
Share
Improve this answer
Follow
answered Apr 9, 2014 at 6:42
foxidrivefoxidrive
40.7k1010 gold badges5656 silver badges6969 bronze badges
Add a comment
|
|
I want to back up files from C:\ to D:\, but I have a problem with the folder names. The folders are "C:\test back" and "D:\new".
This is my code:
@echo off
cd\
xcopy C:\test back D:\new
The error is "Invalid number of parameters".
When I rename the folder from test back to test_back, it works fine:
xcopy C:\test_back D:\new
Can you tell me why, and how I can make xcopy in a batch file work when the folder name contains a space?
Thank you. I'm new to backing up files.
|
invalid number of parameter Backup file
|
0
No, there is no way to do this if the data file is greater than 10 GB.
The size limit applies to the data file (log file excluded).
The database size limit for SQL Server Express was raised to 10 GB in 2008 R2 and 2012, and it remains the same in 2014. The 10 GB limit only applies to relational data, and FILESTREAM data does not count towards it (http://msdn.microsoft.com/en-us/library/bb895334.aspx).
But if you try to take a backup and restore a database with a data file larger than 10 GB, it will not work.
That would probably be one of the best hacks if it could be done :)
Share
Improve this answer
Follow
answered Apr 7, 2014 at 14:40
Angel_BoyAngel_Boy
98822 gold badges77 silver badges1616 bronze badges
2
Ah, I see. I suppose what I was trying to ask was is there another solution that doesn't use Microsoft SQL Server Studio Management? Perhaps a free third-party tool? Obviously, I realize that this would be a long shot..
– Vincent
Apr 8, 2014 at 4:54
Apparently there isn't any.
– Angel_Boy
Apr 8, 2014 at 5:04
Add a comment
|
|
I am currently using Microsoft SQL Server Management Studio Express to restore MS SQL backup file databases. Although the majority of my files are less than 10 GB, there are some that are greater.
Is there a possible solution to restore the files greater than 10 GB for free?
Thank you all for your help!
|
Is it possible to restore MS SQL backup databases that are in excess of 10 GB for free?
|
0
I'm not aware of any built-in way to do that. However, you can run a SELECT on all your tables, write the results to CSV, XML or JSON files, and then compress them.
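For what it's worth, a minimal sketch of that idea in plain Python (on Android you would do the equivalent with the platform's SQLite and zip APIs; both file names below are placeholders):
import csv
import io
import sqlite3
import zipfile

def export_db_to_zip(db_path, zip_path):
    # Dump every table of an SQLite database to CSV files inside one zip archive.
    conn = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for table in tables:
                cursor = conn.execute('SELECT * FROM "{}"'.format(table))
                buf = io.StringIO()
                writer = csv.writer(buf)
                writer.writerow([col[0] for col in cursor.description])  # header row
                writer.writerows(cursor)                                 # data rows
                zf.writestr(table + ".csv", buf.getvalue())
    finally:
        conn.close()

export_db_to_zip("database.db", "backup.zip")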
Share
Improve this answer
Follow
answered Mar 29, 2014 at 20:44
Hannoun YassirHannoun Yassir
20.9k2323 gold badges7979 silver badges113113 bronze badges
Add a comment
|
|
I have an app with an SQLite database. I want to add the ability to back up and restore the database. I see a lot of answers about backing up to an SD card, but what if a device has no SD card? How do I implement the backup then? Or maybe I can send the database.db file to an email address the user provides, download it from there when it's needed, and add it back to the application?
What can you advise?
|
Backup SQLite database to email?
|
0
The easiest (and laziest) way to do this, IMHO, is to write a script and then a cron job to run that script.
e.g.
cd /path/to/files
mv filename filename`date +%y%m%d`
find /path/to/files* -mtime +5 -exec rm {} \;
in some script file, where filename is the name of the one file being generated; then add an entry to e.g. /etc/crontab.
The find that executes the rm will delete files over 5 days old, so you keep 5 days' worth of backups.
That's probably a lot easier than requesting a feature from the devs of the program, trying to modify it yourself, etc. (unless there is some already developed feature...).
Share
Improve this answer
Follow
answered Sep 17, 2014 at 20:52
Moe SinghMoe Singh
83988 silver badges1212 bronze badges
Add a comment
|
|
I am using OpenVZ Web Panel to manage my virtual machines. For some reason, OVZ Web Panel's "Daily Backup" option will only store one daily backup of each virtual machine. I have configured the "Backups to keep" setting to more than 1 under the user's profile settings - setting it to values higher than 1 and to "unlimited" - but the setting is ignored, and only 1 backup copy is rotated every morning. I need at least 7 daily snapshot backups for each virtual machine.
Does anyone know how to make it store more backup copies? I have searched forums, but nobody else seems to have this issue. The documentation is also not clear about this. I have changed the owner of the virtual machine and restarted OWP - but still no luck.
|
OpenVZ Web Panel Daily Backup
|
0
I strongly recommend using the logrotate utility, available on most *nix distros. It has the following options of interest to you:
compress
Old versions of log files are compressed with gzip by default.
dateext
Archive old versions of log files adding a daily extension like YYYYMMDD instead
of simply adding a number.
notifempty
Do not rotate the log if it is empty (this overrides the ifempty option).
rotate count
Log files are rotated count times before being removed or mailed to the address specified
in a mail directive. If count is 0, old versions are removed rather than rotated.
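If logrotate is not an option (the asker needs a plain script), the pruning itself is small enough to sketch - here in Python, assuming the YYYY_MM_DD_HH_MM_backup_<user>.zip naming and the /home/admin/files_zip folder from the question; a bash equivalent built on ls -t and tail would work the same way:
import re
from pathlib import Path

backup_dir = Path("/home/admin/files_zip")   # folder from the question
pattern = re.compile(r"^\d{4}_\d{2}_\d{2}_\d{2}_\d{2}_backup_(?P<user>.+)\.zip$")

# The timestamp prefix sorts lexicographically, so the last match per user is the newest.
newest = {}
for f in sorted(backup_dir.glob("*.zip")):
    m = pattern.match(f.name)
    if m:
        newest[m.group("user")] = f

# Delete everything that is not the newest backup of its user.
for f in backup_dir.glob("*.zip"):
    m = pattern.match(f.name)
    if m and f != newest[m.group("user")]:
        f.unlink()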
Share
Improve this answer
Follow
answered Mar 15, 2014 at 12:38
anubhavaanubhava
771k6666 gold badges582582 silver badges649649 bronze badges
2
Thank you for your answer but I need to make a bash script :(
– Eloygplaza
Mar 15, 2014 at 13:47
logrotate is simply called directly from a BASH script.
– anubhava
Mar 15, 2014 at 14:08
Add a comment
|
|
I have an Ubuntu bash script that makes a .zip file of a user's home directory, for example admin; the name of the zip looks like YYYY_MM_DD_HH_MM_backup_admin.zip. After that I make another backup, for example for the user admin2.
Those files go into the folder /home/admin/files_zip. I want to make a script that deletes all the old backups of the same user and keeps only the newest one.
PS: sorry for my bad English :(
|
ubuntu bash script to make backup of /home/$user but keep the newest one
|
Backup to external drive since Google drive is not very safe. Thanks to arkascha!
|
I have been using Tortoise SVN and VisualSVNManager to create local svn repositories for my projects. Is there a way to back up those repositories to Google Drive? I know that you can use
svnadmin hotcopy path/to/repository path/to/backup --clean-logs
to back up locally. Could I use this to back up to Google Drive? What would the path to Google Drive be?
Thanks!
|
Backup local svn repository to Google Drive
|
0
Have you looked at the Azure Storage redundancy options? The geo-redundant option might solve your replication need.
http://blogs.msdn.com/b/windowsazurestorage/archive/2013/12/04/introducing-read-access-geo-replicated-storage-ra-grs-for-windows-azure-storage.aspx
Share
Improve this answer
Follow
answered Mar 10, 2014 at 15:54
CSharpRocksCSharpRocks
7,07511 gold badge2121 silver badges2727 bronze badges
2
My problem is that the customer wants to have a copy, maybe on their local server. I have found something called StorSimple that may do what I am looking for. But I have been pulled off this for now.
– jagdipa
Mar 13, 2014 at 16:53
We have a similar case. Redundancy doesn't really solve this case as data might be affected on one site and it will be synced with other nodes. The best way is have something similar to AWS' cross-bucket replication.
– Anton Zorin
Apr 3, 2017 at 8:17
Add a comment
|
|
I have an Azure website that allows customers to upload documents. There are a lot of documents (~200 GB so far).
I need a way to back up the documents to another location (Azure or elsewhere), or to have live replication to a server. Is there anything I can use that will do this?
|
How to backup/replicate Azure store onsite
|
0
How often do you think the backup script will run?
It should not be possible to run multiple instances at the same time,
correct? Would setting this up as a cron job work? (nightly, or twice a
day)
The backup script should be a shell script.
The generate-report action would somehow trigger that script: either execute the command on the server, or change a flag in the database that a cron job can pick up.
The shell script would allow only one instance, and you would have a database table with the status of the backup (percentage and other details).
So the flow will be:
access generate-report: no instance is running, so the backup shell script starts
the shell script constantly updates the percentage-completed field in the table
generate-report is accessed every few seconds and reads the percentage (because the shell script is already running)
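The pattern itself is language-agnostic; a minimal Python sketch of the two pieces the worker needs - a single-instance lock and a progress value the page can poll. The lock and progress file paths are made up for the example; in the setup above the progress would go into a database column instead (fcntl is Unix-only):
import fcntl
import sys
import time

# Single-instance guard: refuse to start if another run already holds the lock.
lock = open("/tmp/report.lock", "w")
try:
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit("another report generation is already running")

total_steps = 100
for step in range(1, total_steps + 1):
    time.sleep(0.1)  # stands in for one unit of real work
    with open("/tmp/report.progress", "w") as f:
        f.write(str(step))  # the generate-report page polls this value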
Share
Improve this answer
Follow
answered Mar 7, 2014 at 9:09
cornelbcornelb
6,05633 gold badges2020 silver badges3030 bronze badges
Add a comment
|
|
What I basically want to do: my PHP web application has a button (an Excel report generator) whose work may take a few minutes, and clicking it should immediately return control to the page while the process runs in the background.
So here's the workflow:
User clicks 'generate Report' button
ajax call made to '../city/generate-report' and returns immediately
process is started and runs until completion while the user can then go about his business
User can return to report page and see progress: "Report 50% complete"
What's the best way to accomplish this? Brief answers are fine. I don't want code written for me, just some guidance. I looked at shell_exec, but I'm not sure whether that is the best way or, if it is, how to use it to run functions within a web app (CakePHP 2.0 framework, if that makes any difference). Thanks.
|
using background process take large db back up in cakephp?
|
I think the most scalable method for you to achieve this is to use AWS Elastic MapReduce (EMR) and Data Pipeline.
The architecture is this:
You use Data Pipeline to configure S3 as an input data node, then EC2 with Pig/Hive scripts to do the required processing and send the data to SFTP. Pig can be extended with a custom UDF (user-defined function) to send data to SFTP. You can then set up this pipeline to run at a periodic interval. Having said this, it requires quite some reading to get all of this working - but it is a good skill to acquire if you foresee future data transformation needs.
Start reading from here:
http://aws.typepad.com/aws/2012/11/the-new-amazon-data-pipeline.html
A similar method can be used for taking periodic backups of DynamoDB to S3, reading files from FTP servers, or processing and moving data to, say, S3/RDS, etc.
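If EMR/Data Pipeline is more machinery than you need, a smaller-scale alternative (not the approach above) is to pull only new or changed objects down with boto3 from the EC2 instance and let the NAS fetch the staging folder over SFTP afterwards. A minimal sketch; the bucket name and local path are placeholders, and "changed" is approximated by a size comparison:
import os
import boto3  # pip install boto3

BUCKET = "my-bucket"        # placeholder
LOCAL_ROOT = "/backup/s3"   # staging directory on the EC2 instance (placeholder)

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):          # skip folder placeholder objects
            continue
        local_path = os.path.join(LOCAL_ROOT, key)
        # Download only if the file is missing locally or its size differs.
        if (not os.path.exists(local_path)
                or os.path.getsize(local_path) != obj["Size"]):
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(BUCKET, key, local_path)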
|
I have an S3 bucket with about 100 GB of small files (in folders).
I have been requested to back this up to a local NAS on a weekly basis.
I have access to an EC2 instance that is attached to the S3 storage.
My NAS allows me to run an SFTP server.
I also have access to a local server on which I can run a cron job to pull the backup if need be.
How can I best go about this? If possible I would like to only download the files that have been added or changed, or compress them on the server end and then push the compressed file to the SFTP server on the NAS.
The end goal is to have a complete backup of the S3 bucket on my NAS with the lowest amount of transfer each week.
Any suggestions are welcome!
Thanks for your help!
Ryan
|
Script to take a S3 bucket, Compress it, push the compressed file to an SFTP server
|
Make use of RMAN and define your retention policy.
Back up the trans_bkp tablespace using something like this (keep it for as long as you like):
BACKUP TABLESPACE TRANS_BKP KEEP FOREVER NOLOGS TAG 'FIRSTHALF2014';
Then truncate the table transaction_bkp.
Restore:
Make use of TSPITR (Automatic Tablespace Point-in-Time Recovery)
Utilize RMAN duplicate from backup and set until time
You might want to use a naming convention for the tablespace.
Like TRANS_BKP_01_2014, TRANS_BKP_02_2014, etc...
|
In Oracle 11.2 DB I have:
- transaction table in tablespace users and
- transaction_bkp table in trans_bkp tablespace
Transaction table holds data for 1 month and transaction_bkp should hold data as long as possible.
The problem is that the trans_bkp tablespace becomes full after 6 months.
The idea to resolve this is to back up the trans_bkp tablespace every month and then truncate the transaction_bkp table.
How can I do this backup?
And if the customer later needs some specific data from the past, how can I deliver it?
|
Oracle backup and recovery tablespace
|