
Suggestions: Post suggestions for upcoming versions

 
 
Old 02-15-2005, 12:53 PM   #1
enricomauri
Junior Member
 
Join Date: Feb 2005
Posts: 5
FlashFXP needs AUTO-RESUME / SMART RESUME / SMART OVERWRITE and SYNCHRONIZE features!

Hi !

FlashFXP is nice, but it lacks some important (in my opinion, VITAL) options:

(1) a concrete SYNCHRONIZE command with definable rules

(2) an AUTO RESUME - SMART RESUME feature

(3) a SMART OVERWRITE feature

(4) Multi-part download for big files (like GetRight and similar programs). This request speaks for itself, but it would be a real lifesaver with really huge files.


Let's look at everything in detail:

1) you surely know other FTP clients have this option, but its usefulness depends on the set of rules you can define. Without the next two options, it may be pointless.

2) AUTO-RESUME: when something goes wrong and my download is interrupted for any reason, the file on the PC is incomplete. I can use the resume function to resume the download, but what happens if some hours or days have passed and the file on the server has changed? If the new file on the server is the same size as or bigger than the previous one, when you resume (because the local file is shorter) you will obtain a completely useless file (the first part is from the first version of the remote file and the second is from the new one). Sure, you could trust times/dates, but sometimes they are unreliable. FlashFXP's rule is "if the local file is shorter, you can resume, overwrite or skip". Too SIMPLE! You should be able to say something like:

"if I'm downloading a file and it gets interrupted, please just try to download it again xxx times until completed, using an AUTO-RESUME function because it help to avoid a total download of the interrupted file, BUT you should check that the file you are trying to download hasn't changed since you started the download operation (here you could use the server dates, the dimension, XCRC, SFV file checking and so on). "

The necessary information about these operations should be saved in the queue itself, so when you load it back you know the size/date/time of the remote file you started to download, and you can check this info against the remote site to confirm whether the remote file has changed. If everything is as in the previous session, you can just retry the download by resuming it. Otherwise, you could try the SMART-RESUME functionality I describe below.
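Just to illustrate what I mean, here is a rough sketch of such a queue entry (Python, purely hypothetical; none of these field names exist in FlashFXP's queue format, and the server must support SIZE/MDTM for the re-check):

Code:
from dataclasses import dataclass

@dataclass
class QueueEntry:
    remote_path: str
    local_path: str
    remote_size: int       # SIZE reply when the download first started
    remote_mdtm: str       # MDTM reply, e.g. "20050215125300"
    bytes_done: int = 0    # how far the previous session got

def remote_file_unchanged(ftp, entry):
    """Re-check size and timestamp before resuming from a reloaded queue."""
    ftp.voidcmd("TYPE I")          # some servers want binary mode for SIZE
    size = ftp.size(entry.remote_path)
    mdtm = ftp.sendcmd("MDTM " + entry.remote_path).split()[-1]
    return size == entry.remote_size and mdtm == entry.remote_mdtm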


SMART-RESUME: let's suppose another situation: you have a local file which is shorter than the remote file and you suspect it could have been cut off during a transfer. The dates may be the same or not; it doesn't matter, or you can't trust them. You would like to resume the file, but how can you be sure the server really holds the very same file you started to download? A possible solution would be to download the first and last KBytes of the remote file (not ALL of the file, just some leading and trailing KBytes) and compare them to the corresponding parts of the local file. If they match, bingo! You are probably safe to resume it (at least, you should try to). I don't say it's a perfect solution, but it would help a lot in the described situation if you handle a lot of big files via FTP. For sure, if the checks fail, you are completely sure you have to re-download the entire file (full overwrite). Automating this would be a great addition to FlashFXP. It's useful too if something went wrong while saving the last chunk of the local file, because you could immediately spot the difference and decide what to do (ideally, the FTP client could be asked to cut the last KBytes of the file, since typically the last 4-8 KBytes get trashed, and then check against the remote file again).
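Here is a rough sketch of what I mean, in Python with plain ftplib (FlashFXP is not written in Python, of course; the helper names and the 32 KB default are my own invention, just to show the idea):

Code:
import ftplib, os

def fetch_segment(ftp, path, offset, length):
    """Grab `length` bytes of a remote file starting at `offset`, via REST."""
    ftp.voidcmd("TYPE I")                       # binary mode; REST needs it
    conn = ftp.transfercmd("RETR " + path, rest=offset)
    buf = b""
    while len(buf) < length:
        chunk = conn.recv(min(8192, length - len(buf)))
        if not chunk:
            break
        buf += chunk
    conn.close()                                # we only wanted a slice
    try:
        ftp.voidresp()                          # server may answer 426 here
    except ftplib.error_temp:
        pass                                    # expected: we cut the data link
    return buf

def tail_matches(ftp, remote_path, local_path, nbytes=32 * 1024):
    """Compare the trailing bytes of the local file with the same
    region of the remote file before daring to resume."""
    local_size = os.path.getsize(local_path)
    offset = max(local_size - nbytes, 0)
    with open(local_path, "rb") as f:
        f.seek(offset)
        local_tail = f.read()
    remote_tail = fetch_segment(ftp, remote_path, offset, len(local_tail))
    return local_tail == remote_tail

If tail_matches() returns False, resuming is pointless and the file must be re-downloaded from scratch.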



3) SMART OVERWRITE

Then, imagine another possible and similar situation (the mirror image of the SMART-RESUME one): you have downloaded some medium/big files (let's say 10 MBytes or more) in the past days and you want to be sure they are the same as the ones residing on the remote server. You see different dates on the remote files, but the size is the same: what are you going to do? Well, especially if the files are ZIPs or other types of archive (it may work on a lot of other file types too, it depends...), you could just download the first and last 32 KBytes of the remote file and check them against the same parts of the local file. If the checks fail, again you are sure that the file must be fully re-downloaded (and cannot be resumed!). Typically, the files that differ would be placed in the queue with the flag "OVERWRITE THE LOCAL FILE".
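Reusing the hypothetical fetch_segment() helper from my previous sketch, the overwrite check is just the head-side twin of the resume check:

Code:
def head_matches(ftp, remote_path, local_path, nbytes=32 * 1024):
    """Compare the first `nbytes` of both files; a mismatch means the
    remote file changed and must be fully re-downloaded."""
    with open(local_path, "rb") as f:
        local_head = f.read(nbytes)
    remote_head = fetch_segment(ftp, remote_path, 0, len(local_head))
    return local_head == remote_head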



4) well, multi-part downloads are an old feature in other FTP clients, but sometimes they don't work well. Maybe the FlashFXP people can do it better? I hope so!
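The basic technique is well known: open several connections and let each one REST to its own offset, writing into its own slice of the local file. A purely illustrative Python/ftplib sketch (all the names are mine; a real client would need retries and much more error handling):

Code:
import ftplib, threading

def download_part(host, user, pw, path, out, offset, length):
    """One worker: its own connection, REST to `offset`, fill its slice."""
    ftp = ftplib.FTP(host)
    ftp.login(user, pw)
    ftp.voidcmd("TYPE I")
    conn = ftp.transfercmd("RETR " + path, rest=offset)
    done = 0
    with open(out, "r+b") as f:
        f.seek(offset)
        while done < length:
            chunk = conn.recv(min(8192, length - done))
            if not chunk:
                break
            f.write(chunk)
            done += len(chunk)
    conn.close()
    try:
        ftp.voidresp()              # 426 is expected: we cut the data link
    except ftplib.error_temp:
        pass
    ftp.quit()

def multipart_download(host, user, pw, path, out, parts=4):
    ftp = ftplib.FTP(host)
    ftp.login(user, pw)
    ftp.voidcmd("TYPE I")
    total = ftp.size(path)          # the server must support SIZE
    ftp.quit()
    with open(out, "wb") as f:
        f.truncate(total)           # pre-allocate the local file
    step = total // parts
    workers = [threading.Thread(
                   target=download_part,
                   args=(host, user, pw, path, out, i * step,
                         step if i < parts - 1 else total - step * i))
               for i in range(parts)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()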



That's all, folks!
Should you need some help in defining a possible user interface for these modifications, I'm willing to help.

Thanks a lot for your time and attention,

EM
enricomauri is offline  
Old 02-15-2005, 12:57 PM   #2
enricomauri
Junior Member
 
Join Date: Feb 2005
Posts: 5
Extra details on the previous post...

It would definitely be important to have the "RESUME" / "OVERWRITE" / "SKIP" options defined per single file and not for the whole download operation.

Because, you know, for some files which are really large it would be a good idea to use the smart-resume or smart-overwrite feature, while for the others it could be real overkill (too slow and bandwidth-consuming).

Regards,

EM
enricomauri is offline  
Old 02-15-2005, 06:14 PM   #3
MxxCon
Super Duper
FlashFXP Beta Tester
 
Join Date: Oct 2001
Location: Brooklyn, NY
Posts: 3,881

enricomauri, did you actually try to use FlashFXP or even look into the help file before writing this request?

go to "Options", "File Exist Rules". if you properly configure settings there, you'll get your "AUTORESUME /SMART RESUME / SMART OVERWRITE"

as for multi-threaded downloads...
just search the msgboard
__________________
[Sig removed by Administrator: Signature can not exceed 20GB]
MxxCon is offline  
Old 02-16-2005, 04:28 PM   #4
enricomauri
Junior Member
 
Join Date: Feb 2005
Posts: 5
Other explanations...

MxxCon,

The search function gave me a "404" error yesterday, so I couldn't try it.

About your answer, I think it's not as you say. I'm trying the latest beta (3.1.10 build 1067), and there you can't select the behaviour for files whose dates differ from the local ones (the "file existing" dialog...), nor can you force a content check (which would be useful at least for big files) to ensure proper resuming (i.e., ensuring the file on disk really is the first part of the one on the server). The rollback option isn't enough because it only cures possible network problems that cause corruption at the end of a badly downloaded file. Good, but not enough, because it's just a rollback.

If you read my post again carefully, you'll see my proposed features are inherently different from the ones available, because they give the user real control over the behaviour of the FTP program.

If necessary, I can give you a full schema for the user interface.

Regards,

EM
enricomauri is offline  
Old 02-16-2005, 05:07 PM   #5
MxxCon
Super Duper
FlashFXP Beta Tester
 
Join Date: Oct 2001
Location: Brooklyn, NY
Posts: 3,881

looking at a file's content to make sure it's really the first part that's going to be resumed isn't a good way to go.
how many bytes are you going to check? 4KB, 8KB, 16KB, 10% of the file?
even if they pass the test, the file can still be different, and if resumed, the whole file will be trashed.
if anything, the XCRC feature should be used.
the server date is also unreliable, because the server can change its timezone or 'touch' files during reindexing.
so all we need is support for the XCRC feature and an extension of the existing "file exist rules" to consider the results of the XCRC command.
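roughly like this (just a sketch: XCRC is a nonstandard extension, and the "250 <hex>" reply format i assume here varies between servers, so don't take it literally):

Code:
import zlib

def local_crc32(path):
    """CRC32 of the whole local file, computed in 64 KB chunks."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def remote_matches_local(ftp, remote_path, local_path):
    """Compare the server's XCRC reply against the local file's CRC32."""
    reply = ftp.sendcmd("XCRC " + remote_path)    # e.g. "250 B04C7337"
    return int(reply.split()[-1], 16) == local_crc32(local_path)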
__________________
[Sig removed by Administrator: Signature can not exceed 20GB]
MxxCon is offline  
Old 02-17-2005, 03:57 PM   #6
enricomauri
Junior Member
 
Join Date: Feb 2005
Posts: 5

Mxxcon,

It's true that it could be a pain in the ass to download a part of every file. Surely I would prefer XCRC, but it's not so common nowadays. My checks were meant to be used only on the files you are doubtful about.

Just having the chance to check the first and/or last KBytes (how many should be a user decision, but for a ZIP file 4-8 KBytes are normally enough) is a "simple" workaround to get a better chance of downloading exactly what you want in the fastest way possible. When you make a change to an archive, the first KBytes normally do change.

But I think you understood my point. There's no sense in going further in this direction.

About the dates, you're partially right. Dates are, sometimes, not correct on some servers. But if you have a server where they ARE quite often available and reliable, why not give the user a chance to exploit them? As I said before, these options are common in other products I'm trying right now (3DFTP, Core FTP, and so on), but those are less stable than FlashFXP.



So, in practice, what would I like to have in the user interface?




_ RESUME (unavailable [grayed out] if OVERWRITE ALWAYS is checked)
  (only to be considered if the local file is shorter than the remote one, obviously)

    _ NORMAL RESUME (simply resumes shorter local files, no other checks)

    _ SMART RESUME (before resuming shorter local files, makes some content checks)

        _ USE FOR FILES BIGGER THAN XX MBytes

        _ USE XCRC if available on the FTP server

        _ Check XX KBytes at start (user must input XX, otherwise it defaults to 32)

        _ Check YY KBytes at end (user must input YY, otherwise it defaults to 32)

    IMPORTANT:
    **** if these checks fail, the file is not resumable and it must be OVERWRITTEN,
    so the program must simply switch to OVERWRITE ALWAYS for the current file! ****

_ OVERWRITE

    _ ALWAYS (no resuming, no other checks, just overwrite the local file; used also
      if the SMART RESUME feature said it's impossible to resume the local file)

    _ ENFORCE SIZE CHECK:
      (overwrite only if at least one of these is checked and true)

        _ OVERWRITE IF LOCAL SIZE IS DIFFERENT

        _ OVERWRITE ONLY IF LOCAL FILE SIZE IS:

            _ larger than the remote file size
            _ the same as the remote file size
            _ smaller than the remote file size

    _ ENFORCE DATE/TIME CHECKS:
      (overwrite only if at least one of these is checked and true)

        _ OVERWRITE IF LOCAL DATE/TIME IS DIFFERENT

            _ Use XX hours of tolerance to avoid time-zone problems (user must input XX,
              otherwise it defaults to 1)

        _ OVERWRITE ONLY IF LOCAL FILE DATE/TIME IS:

            _ earlier than the remote file date/time
            _ equal to the remote file date/time
            _ later than the remote file date/time

    _ SMART OVERWRITE
      (warning: this can be really slow, use it only for BIG and relevant files)
      (if any difference is spotted, the file must be re-downloaded from scratch)

        _ USE SMART OVERWRITE FOR FILES BIGGER THAN XX MBytes ONLY (user must input XX,
          otherwise it defaults to 1)

        _ USE XCRC if available on the FTP server

        _ Check XX KBytes at start (user must input XX, which defaults to 32)

        _ Check YY KBytes at end (user must input YY, which defaults to 32)

_ SKIP is the default option here, of course. If no condition of the defined options
  is true, the file is simply skipped because it already exists locally.







I think my schema should be easy to understand.
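To make the flow concrete, here is a rough sketch of how the decision logic could work (Python, purely illustrative; FlashFXP is obviously not written in Python, and all the option names are mine, taken from the schema above):

Code:
from dataclasses import dataclass

@dataclass
class FileInfo:
    size: int        # bytes
    mtime: float     # seconds since the epoch

def decide(local, remote, opts, content_checks_pass):
    """Return "resume", "overwrite" or "skip" for an existing local file.
    `content_checks_pass` is a callable doing XCRC or head/tail compares."""
    if opts.get("overwrite_always"):
        return "overwrite"
    if local.size < remote.size:
        if opts.get("smart_resume"):
            # checks failed -> NOT resumable -> switch to overwrite
            return "resume" if content_checks_pass(local, remote) else "overwrite"
        if opts.get("normal_resume"):
            return "resume"
    if opts.get("overwrite_if_size_differs") and local.size != remote.size:
        return "overwrite"
    tolerance = opts.get("tolerance_hours", 1) * 3600
    if opts.get("overwrite_if_date_differs") and abs(local.mtime - remote.mtime) > tolerance:
        return "overwrite"
    return "skip"    # default: the file already exists locally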



It would be nice to have some on-screen samples of what's going on based on your choices (a sample of a possible situation that tells you that, with the defined options, one file would be transferred, another not, and so on).



I would just live better if I had those options.

Regards,

EM
enricomauri is offline  
Old 02-25-2005, 01:43 PM   #7
andreag
Junior Member
FlashFXP Registered User
 
Join Date: Feb 2005
Posts: 4
Corruption in resumed downloads... that's why it happens in some cases!

(I'm re-posting this here because it was previously in a restricted part of the forum; now everyone can read it)

Consider all this as an "official" suggestion from a registered user.

Now I understand (thanks Bigstar) that the "rollback" feature is only for cutting the last KBytes, which may be fake data if something went wrong during a download and the connection dropped.

I use FlashFXP to copy a lot of big/huge files (1 GB - 2 GB), but also small files, from remote FTPs. Sometimes I got strange corruption problems in very large files. Now I understand how it all happened. I hope it's possible to avoid this in the future.

I think it could be enough to have the possibility to check at least the last 3-4 KBytes of the downloaded file, comparing them to the corresponding 3-4 KBytes of the remote file, possibly only for files where it really makes sense for the user (I would choose which ones myself using name, extension, size or date values); but if you follow my blueprint, it could probably be used always for files longer than about 10-20 KBytes.

Maybe a higher value for this "intelligent rollback and check" could be useful, or maybe not. I don't know, but I would let the user decide.

It could be done this way: when you're resuming, you need to open the remote file and the local one in any case. It would be enough to open the remote one some KBytes "before" the actual restart point, put the downloaded data in a memory buffer/structure (...I used to be a programmer years ago...) without writing anything to disk (thus avoiding ruining the local file, which could simply be a previous and valid shorter version of the remote file), and then compare this data with the corresponding part of the local file. If the parts are identical, it's probable (but not completely certain) that you can safely resume. To avoid possible data-corruption problems, I would avoid checking the very last KB of data saved in the local file (or consider enforcing the "rollback" feature in this case too). If the compared parts are different, the files are different versions with the same name, so the user should choose whether to overwrite the local file, rename it, or maybe change the name of the one being downloaded. For sure, he must not resume at all!

In this way, you would avoid the "broken" file and the second (full) download you would otherwise need when you realize the file is broken. With big files, it may save hours of connection time and bandwidth.

I hope my explanation was clear; otherwise, tell me.

Then I read the "queue file format definition" to understand what's inside, and I completely agree (now I finally understand his point of view!) with the guy who signs his messages as "EM" and wrote so many things about the format.

If it's true that some sites may be reliable about file dates, we should have them (dates/timestamps) included in the queue file too. They could be useful for spotting possible changes even if the file size is the same (that happened to me too! I had to re-download a lot of files in a hurry because the content/version was different but the size was exactly the same).

Maybe "EM" asked too much, but he's definitely right. As things are now, we all have some risks to get corruption on files which is really difficult to spot until is too late. And it already happened to me...

About queue saving: my opinion is to let us users choose what to do. I'm doing fine with the current settings. Just let us choose the interval (for me it would be "every file"; for others "every minute", if they're doing many fast and small transfers).

I would like to know the opinion of the other REGISTERED users about all these matters, but in any case I hope to see some of these options implemented in the near future.

At least the resume check on the last KB of the files. From an (ex-)programmer's point of view, I think it should be easy enough to implement.

Let me know what the developers are thinking about all this.

Move this to the proper/another thread if necessary.

Thkx + Greetz,

Andrea
andreag is offline  
Old 02-25-2005, 01:58 PM   #8
bigstar
FlashFXP Developer
FlashFXP Administrator
ioFTPD Beta Tester
 
bigstar's Avatar
 
Join Date: Oct 2001
Posts: 8,012

Testing the file integrity would require a test at the end of the file. The best place would be during the rollback; however, the corruption could have occurred when the connection was lost, and the rest of the file might be OK. So if the integrity check is done on the data that is going to be replaced, and the data doesn't match and we overwrite it, the user could end up losing GBs of data that was in fact flawless.

If we were to test any other part of the file, it would require starting/stopping and then resuming from the end of the file, which could lead to other problems.

Future versions of FlashFXP will support XCRC, which will allow us to validate the download to ensure it's identical to the copy on the FTP server.

Starting with build 1070, the "Auto-Save on change" option (referring to the auto-generated queue file) spaces saves out to eliminate the performance loss when transferring lots of small files. The original method of always saving after a transfer was probably not the best way to do this, simply because saving so often can result in the queue file being corrupted if Windows crashes/blue-screens at the exact same time, which is one of the reasons no new option was made to allow saving after every transfer.
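The general idea looks roughly like this (an illustrative sketch only, not the actual FlashFXP code; the 5-second interval is just an example):

Code:
import time

class QueueSaver:
    """Save the queue when it changes, but at most once per interval."""
    def __init__(self, save_func, min_interval=5.0):
        self.save_func = save_func          # actually writes the queue file
        self.min_interval = min_interval    # seconds between saves
        self.last_save = 0.0
        self.dirty = False

    def mark_changed(self):
        """Called after every transfer; saves only if enough time passed."""
        self.dirty = True
        if time.monotonic() - self.last_save >= self.min_interval:
            self.flush()

    def flush(self):
        """Force a save, e.g. on exit, so no changes are ever lost."""
        if self.dirty:
            self.save_func()
            self.last_save = time.monotonic()
            self.dirty = False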
bigstar is offline  
Old 02-25-2005, 02:58 PM   #9
andreag
Junior Member
FlashFXP Registered User
 
Join Date: Feb 2005
Posts: 4
Resume issues... a practical example!

Sorry for the double post; something went wrong here with Internet Explorer and the old session.

About your answer:

> Testing the file integrity would require a test at the end of the file. The best place would be during the rollback; however, the corruption could have occurred when the connection was lost, and the rest of the file might be OK. So if the integrity check is done on the data that is going to be replaced, and the data doesn't match and we overwrite it, the user could end up losing GBs of data that was in fact flawless.

Good point. That's why you should ENFORCE the rollback for at least 1-2 KBytes (maybe more? You have more experience than me here) and then (just as an example) check only the last 4 KBytes of the rolled-back file (so the rolled-back data won't be compared!).

Example:

I transfer file "A" (2000 KBytes long) from the FTP server, but something breaks after 1000 KBytes and the transfer stops.

File "A" on the client is 1000 KBytes, on the FTP server is 2000 KBytes.

Then, when resuming, I cut 2 KBytes (maybe more? maybe it's better to simply use the "rollback" parameter we already have in FlashFXP) off the local file as a rollback, thus avoiding errors due to network problems. Now local file "A" would (logically) be 998 KBytes, supposedly free of trailing corruption. Now I can take the last 4 KBytes of local file "A" (from the 995th to the 998th KByte) and put them in a small memory buffer (an array of bytes). Then I ask the FTP server to restart (REST) the REMOTE file "A" from the position of KByte 995 and read the first 4 KBytes into another memory buffer. At this point, I simply have to compare the 4 KBytes in the buffer from the LOCAL "A" file with the ones from the REMOTE "A" file. If the comparison is OK, then it's probably OK to continue with the resume (at this point, the memory buffers get written to disk). If it's not, then thanks to the rollback operation that cut out possible network errors, we should overwrite, because the files are different for sure (provided we used enough rollback KBytes to avoid trailing corruption).

If you follow this logic, you can't go wrong. With this simple addition, you won't start wrong resume operations, at least in situations like mine (where it happened to me a lot of times, for sure).
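In code, the double-buffer trick could look roughly like this (Python again, reusing the hypothetical fetch_segment() helper sketched earlier in this thread; the 2 KB / 4 KB values are just the ones from my example):

Code:
import os

ROLLBACK = 2 * 1024     # cut this much off the local file first
CHECK = 4 * 1024        # then compare this much just before the cut

def safe_resume_offset(ftp, remote_path, local_path):
    """Return the byte offset to resume from, or None if we must overwrite."""
    local_size = os.path.getsize(local_path) - ROLLBACK
    if local_size <= CHECK:
        return None                     # too small, just re-download it
    offset = local_size - CHECK
    with open(local_path, "rb") as f:   # buffer 1: tail of the local file
        f.seek(offset)
        local_buf = f.read(CHECK)
    # buffer 2: the same range of the remote file, fetched with REST
    remote_buf = fetch_segment(ftp, remote_path, offset, CHECK)
    if local_buf != remote_buf:
        return None                     # different files: do NOT resume!
    return local_size                   # safe(r) to resume from here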

Moreover, if you had "remembered" the REMOTE size of file "A" when you started to download it, a simple change in that size could be used as an alert that the resume operation is pointless this time, though I think the user should be left to decide about it (if the remote file changed its size, there is concrete doubt that it's the very same file as in the previous download session).

Understood ?

Let me know !

> If we were to test any other part of the file, it would require starting/stopping and then resuming from the end of the file, which could lead to other problems.


True, but even so it could be useful for very big files and archives, to spot the "directory" differences immediately (the list of files is often in the very first KBytes of an archive).

But my "system" explained before isn't requesting more "FTP" commands than the normal situation if the resume operation can really go on. It's obvious that if the buffers are different I have to overwrite the file, so I must restart from the first byte, but it would be the right choice in that case.

> Future versions of FlashFXP will support XCRC, which will allow us to validate the download to ensure it's identical to the copy on the FTP server.


But only a small number of FTP sites seem to support it... Still, having it will surely be helpful.


> Starting with build 1070, the "Auto-Save on change" option (referring to the auto-generated queue file) spaces saves out to eliminate the performance loss when transferring lots of small files. The original method of always saving after a transfer was probably not the best way to do this, simply because saving so often can result in the queue file being corrupted if Windows crashes/blue-screens at the exact same time, which is one of the reasons no new option was made to allow saving after every transfer.

Good point here too !
But then how do you save the information that a particular file is being downloaded at the moment? And how could you save the information about the remote file (size, time, date, and I would also suggest the CRC of the first 4-8 KBytes of both the local and remote files, which would be really useful to spot different archives immediately)?

OK, OK, maybe I'm asking too much... but the simple double-memory-buffer trick, done as I described before, would really solve a lot of problems for me (wrong resumes that corrupt files and force a second full download of the remote file because it changed between subsequent download sessions).

Thanks a lot.

Andrea
andreag is offline  
Old 02-27-2005, 06:07 PM   #10
enricomauri
Junior Member
 
Join Date: Feb 2005
Posts: 5

Andreag,

I finally see that someone (you!) understood my point of view very well. I had the same strange corruption in the past; that's why I needed to see what was going on in detail.

Finally, it seems I'm not alone here in my quest for a better FTP client.

I even tried to design a possible user interface for the options I suggested; it's not rendered very well in the post, but it should give the idea.

I think you're right: the modification for the "last KBytes" check shouldn't be too difficult to implement if the code is decently written, and it solves most (not all) of the possible corruption problems.

I see that you are a "registered user"; maybe they'll listen to you more than they did to me.

In any case, thanks to everyone for reading (and for the appreciation).

Hope my posts were useful for making FlashFXP a better product.

Regards,

EM
enricomauri is offline  
 
