
More Detail and JSON Error Reporting (and CSV maybe) #suggestion

Hi, for those of us who utilise this for professional purposes, when there is a failure it's unclear what has and hasn't been done; often the number of files transferred will actually be higher than the amount reported in the detailed view.

What would be useful would be a FULL report of what was and what wasn't transferred so that we can manually handle the issues.

I've been making some very good progress with both Power Automate and the Google API for handling failed transfers, for example.

I think that if you created an array with error reason(s), then a report on each file with some file metadata (a sub-object containing id, size, name, extension, original mimeType, inferred mimeType), file destination metadata (similar fields in that sub-object), a copy status for each file, top-level destination metadata, and whatever else you can smoosh into it, then you'd be giving us a fighting chance of using this to better perform business-critical functions. :) A rough sketch of the shape I mean is below.
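
Something like this, perhaps (a rough TypeScript sketch of the JSON shape I have in mind; every field name here is just a suggestion, not an existing MultCloud format):

```typescript
// Rough sketch of the suggested JSON transfer report.
// All names are suggestions, not an existing MultCloud format.
interface FileMetadata {
  id: string;
  size: number;             // bytes
  name: string;
  extension: string;
  originalMimeType: string;
  inferredMimeType: string;
}

interface FileResult {
  source: FileMetadata;
  destination?: FileMetadata;          // absent if the copy never happened
  copyStatus: "copied" | "failed" | "skipped";
  errorReasons: string[];              // empty when copyStatus is "copied"
}

interface TransferReport {
  destination: {                       // top-level destination metadata
    cloud: string;                     // e.g. "SharePoint"
    folderId: string;
    folderPath: string;
  };
  files: FileResult[];
}
```

A CSV export would then just be a flattened view of the files array.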

Comments

  • @Eliotcole, Thanks for your suggestion. However, it might be difficult to create a report for every file. If there are too many files, it could take a long time to create a report for each one, and it would also need more space to save these reports.
    Generally, when a transfer fails, you can view the Logs and check the failed files.
  • edited January 2023
    Hi, @Admin, I would then say that this really makes it basically impossible to use MultCloud for:
    • Important file transfers
    • File transfers for others
    • File transfers for work related or professional means
    This is because if we can't have a full breakdown of the status of each failure, then realistically we can't have confidence that MultCloud has performed the job adequately.

    I don't believe that this should be overly onerous to achieve, but I do understand that not everything is simple. Still, let's take an example ...

    EXAMPLE

    I'm going to copy a Google Drive folder to a SharePoint Document Library; some of the details are:
    • 35,000 files
    • Root folder has 35 files and 20 sub-folders
    • Folders go very deep
    • Some Google Workspace documents
    • Some __MACOSX folders (which have files beginning with '.')
    • Some folders that SharePoint/OneDrive doesn't like ('COM', etc)
    • Some files and folders with characters that SharePoint/OneDrive doesn't like ('|', '*', etc)
    Now, if I just copy the main root folder into SharePoint, the MultCloud error reporting that should definitely occur with that setup will sometimes:
    • Not show every file that failed
    • Show a higher number of files moved than the total originally available
    In all of these situations, how can I be sure of what has transferred over and what hasn't? Plus, in the last case, I don't even *know* what's going on there. :#

    Again, if you're trying to copy work-related files, you NEED to be sure they've copied, and (if not) exactly which files have not copied, so that you can effect manual maintenance on the few files in question.

    Like with the file renaming suggestion; if I've been able to effect this myself on limited resources, I reckon that it should be doable for you guys in a future version of the service. A rough sketch of the renaming I mean follows.
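
    For what it's worth, here is a minimal sketch of the kind of renaming I was able to do (TypeScript; the character list is the one SharePoint/OneDrive documents as invalid, and the function name is my own invention):

    ```typescript
    // Sketch: rename a file so SharePoint/OneDrive will accept it.
    // The regex covers the characters " * : < > ? / \ | that SharePoint rejects.
    const FORBIDDEN = /["*:<>?/\\|]/g;

    function sanitiseForSharePoint(name: string): string {
      let safe = name.replace(FORBIDDEN, "_");
      // SharePoint also rejects trailing periods and leading/trailing spaces.
      safe = safe.trim().replace(/\.+$/, "");
      return safe.length > 0 ? safe : "_renamed_";
    }

    console.log(sanitiseForSharePoint("notes|v2*.txt")); // "notes_v2_.txt"
    ```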

    HOW TO APPROACH?

    So in this example, I'm going to assume that the process is able to keep an array variable of everything that has failed, and to keep the relevant data from the source cloud (in this case, Google). Even then, there's a way to approach this that is ... differently ... efficient.

    You could literally just keep the id of the failed file and the destination id in the array, along with the cloud drive type of each. Then, once the transfer has completed, perform GET requests against both clouds' APIs (in bulk where supported) to build a fuller report for the end user. :) A sketch of that flow follows.
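
    A rough sketch of that flow, assuming Node 18+ and TypeScript (the Google Drive v3 files.get endpoint is real; everything else, the names and the shape of the array, is invented for illustration):

    ```typescript
    // Sketch: record minimal info per failed file during the transfer,
    // then enrich the report afterwards with one GET per file.
    interface FailedEntry {
      sourceCloud: string;   // cloud drive type, e.g. "google"
      destCloud: string;     // e.g. "sharepoint"
      sourceId: string;      // id of the failed file on the source cloud
      destFolderId: string;  // id of the destination folder it should be in
    }

    // Look the file up on Google Drive (v3 files.get) to recover its metadata.
    async function enrich(entry: FailedEntry, googleToken: string) {
      const url =
        `https://www.googleapis.com/drive/v3/files/${entry.sourceId}` +
        `?fields=id,name,size,mimeType`;
      const res = await fetch(url, {
        headers: { Authorization: `Bearer ${googleToken}` },
      });
      return res.json(); // { id, name, size, mimeType }
    }

    // After the transfer completes, build the fuller report in one pass.
    async function buildReport(failures: FailedEntry[], googleToken: string) {
      return Promise.all(failures.map((f) => enrich(f, googleToken)));
    }
    ```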

    EDIT
    Just to say that sometimes (not always) it will actually show the individual files being moved, which indicates to me that it is able to take note of such things.
    Incidentally, that transfer did *not* finish successfully, despite saying that it did. I downloaded the original 3.7GB folder and the version from the new location, and the latter was missing a considerable amount of data, in both number of files and total file size.

    However, to show the 'doubt' a little further, here is a screenshot of a *successful* transfer, which was completed using the cloud transfer method rather than the 'Copy to' method whilst browsing the folders.
    So it says that it was successful, yet it processed 0 out of 105 files, 0 bytes out of 38.17MB in total file size, and there's nothing to indicate anything else.

    I've made my own very poor versions of this service using APIs, and I am in no way *close* to your level of skill, my friends. But even there I was able to report where there were failures, and either make a full log, rename and retry, or just retry, automatically (roughly the shape sketched below). I realise that things take time to develop (I work in development), but equally, this is not an insult to your product, merely a suggestion on how it could be made better.
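
    As a flavour of it, my retry handling is roughly this shape (TypeScript; copyFile and all the names here are mine, not anyone's real API):

    ```typescript
    // Sketch: "rename and retry, or just retry" for a failed copy,
    // keeping a full log of each attempt along the way.
    async function copyWithRetry(
      copyFile: (name: string) => Promise<void>, // placeholder for the real copy call
      name: string,
      maxAttempts = 3,
    ): Promise<{ ok: boolean; finalName: string; log: string[] }> {
      const log: string[] = [];
      let attemptName = name;
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          await copyFile(attemptName);
          return { ok: true, finalName: attemptName, log };
        } catch (err) {
          log.push(`attempt ${attempt} failed for ${attemptName}: ${err}`);
          // If the name itself looks like the problem, rename before retrying.
          attemptName = attemptName.replace(/["*:<>?/\\|]/g, "_");
        }
      }
      return { ok: false, finalName: attemptName, log };
    }
    ```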
    END EDIT

    CONCLUSION

    Hope that this helps. Like I say above, this is a suggestion, but I suppose it is also pointing out a small flaw in the system. I really like MultCloud, but it's hard to recommend for professional/important use cases without full transparency.

    If we can at least know each failure, or maybe even, where there are multiple files in a single folder, the failure chain of how far up and down that failure goes, then we can rely on the service for most cases; where there are failures, we can perform manual transfers to ensure completion.