
airtaki

Resume interrupted uploads via filepond

I'm using filepond to handle chunked uploads. Everything works fine, except one thing: is there any way to continue interrupted uploads? For example, a customer starts uploading a large video over a mobile network, but aborts it at around 40%. A few hours later she wants to continue the upload over wifi: same file, but a different browser and a different IP address. In this case I'd like to continue the upload from the last completed chunk, not from the beginning.

As the documentation says:

If one of the chunks fails to upload after the set amount of retries in chunkRetryDelays the user has the option to retry the upload.

In my case there are no failed chunk uploads; the customer simply selects the same file to upload again.

This is exactly what I want:

As FilePond remembers the previous transfer id, the process now starts off with a HEAD request accompanied by the transfer id (12345) in the URL. The server responds with Upload-Offset set to the next expected chunk offset in bytes. FilePond marks all chunks with lower offsets as complete and continues with uploading the chunk at the requested offset.

During upload, I send a custom header with a unique hash identifying the file/user, and store it in the db. When the customer wants to upload the same file and an incomplete version has already been uploaded, I am able to find it and send back an Upload-Offset header. This part is clear to me. But I couldn't get filepond to send a HEAD/GET request before starting the chunked upload to fetch the correct offset. It always starts from zero.
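For computing that Upload-Offset on the server, one reasonable rule is to count only whole chunks as received, so a partially stored trailing chunk gets re-uploaded rather than trusted. This helper is hypothetical, not part of FilePond:

```javascript
// Given the number of bytes already stored for a transfer and the
// chunk size the client uses, return the Upload-Offset to report:
// the start of the next expected chunk.
function nextExpectedOffset(bytesStored, chunkSize) {
    // round down to the last complete chunk boundary; a partial
    // trailing chunk is discarded and uploaded again
    return Math.floor(bytesStored / chunkSize) * chunkSize;
}
```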

I already checked this question, but my case is different: I don't want to continue a paused upload, I'd like to handle an abandoned but later re-uploaded file.

Looking at the filepond.js (4.30.3) source code, I can create a workaround by simply assigning a value to state.serverId. In that case requestTransferOffset will fire and the upload continues from the given offset.

        // let's go!
        if (!state.serverId) {
            requestTransferId(function(serverId) {
                // stop here if aborted, might have happened in between request and callback
                if (state.aborted) return;

                // pass back to item so we can use it if something goes wrong
                transfer(serverId);

                // store internally
                state.serverId = serverId;
                processChunks();
            });
        } else {
            requestTransferOffset(function(offset) {
                // stop here if aborted, might have happened in between request and callback
                if (state.aborted) return;

                // mark chunks with lower offset as complete
                chunks
                    .filter(function(chunk) {
                        return chunk.offset < offset;
                    })
                    .forEach(function(chunk) {
                        chunk.status = ChunkStatus.COMPLETE;
                        chunk.progress = chunk.size;
                    });

                // continue processing
                processChunks();
            });
        }

...but I don't think this is a clean way to do it.
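For reference, the marking step in the snippet above can be isolated as a small pure function. The chunk objects follow the shape used in the filepond source (offset, size, status, progress); ChunkStatus is inlined here just for the sketch:

```javascript
// minimal stand-in for the enum used in the filepond source
const ChunkStatus = { QUEUED: 'QUEUED', COMPLETE: 'COMPLETE' };

// Given the chunk list and the offset the server reported, flag every
// chunk that starts below that offset as complete so only the
// remaining chunks get uploaded.
function markCompletedChunks(chunks, offset) {
    chunks
        .filter((chunk) => chunk.offset < offset)
        .forEach((chunk) => {
            chunk.status = ChunkStatus.COMPLETE;
            chunk.progress = chunk.size;
        });
    return chunks;
}
```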

Has anybody faced this issue yet? Or did I miss something, and is there a simpler way to continue interrupted uploads?

file-upload

upload

chunks

filepond

0 Answers
