techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A hub primarily for passionate technologists, but everyone is welcome

#synologydrive


Another Synology Drive data loss bug

Notwithstanding their recent boneheaded announcement (reported on by Ars Technica) about restricting which drives can be used in their NASes, #Synology gets most things right, but every once in a while their apps just… lose data, and it’s not clear that they care.

I’ve written before about a Synology Drive Client bug on Linux that they’ve known about for years and haven’t bothered to fix. And then there’s the time one of their NASes had a gradually manifesting hardware bug: they could have notified customers and proactively done a recall, but instead they just let customers’ NASes fail, at which point those customers were forced to shell out money for new ones.

Today I’m here to tell you about another data-loss bug in Synology Drive, and the workaround I’ve been forced to implement to avoid having it bite me (again).

Simply put, sometimes Synology Drive Client stops pulling files down from the server. When this happens it claims that everything is fine and synchronizing successfully, and it will happily upload to the server any files you modify locally, but files modified on other computers and synchronized by them to the server never get pulled down to the computer that is in this broken state.

Let me say this again: it claims everything is working properly but it isn’t. That’s generally considered Really Bad.

You can get the client to start synchronizing again by restarting the client, but (a) it’s not clear to me that files which weren’t synchronized in the interim get synchronized when you restart, and (b) there are various data-loss and data-conflict scenarios which occur when you modify files on multiple computers when one or more of them aren’t synchronizing properly.

I don’t know the root cause, so I don’t know of any way to prevent the problem from happening. Instead, I am now running a script every minute on all of my computers that sends and receives “pings” to and from the other computers in the group via temporary directories and files created within my Synology Drive directory. The script emails me when it doesn’t receive a “response” to a ping it sent to one of the other computers. This means I’ll get some spurious emails when one of my computers is asleep or off the network, but those are a small price to pay compared to losing data because Synology Drive is failing again.

I haven’t reported this issue to Synology because it’s intermittent and I have no idea how to reproduce it, so I’m certain they’d blow me off.

Here’s the script, for those of you who are curious.

#!/bin/bash

set -e
shopt -s nullglob

PINGDIR=~jik/CloudStation/tmp/syno-pings
ME=$(hostname --short)
DEBUG=false
INTERVAL=60

while [ -n "$1" ]; do
    case "$1" in
        -d|--debug) DEBUG=true; shift ;;
        -i|--interval) shift; INTERVAL="$1"; shift ;;
        -*) echo "Unrecognized option: $1" 1>&2; exit 1 ;;
        *) break ;;
    esac
done

if [ -z "$1" ]; then
    echo "No remote host(s) specified" 1>&2
    exit 1
fi

debug() {
    if ! $DEBUG; then
        return
    fi
    echo "$@"
}

file_age() {
    local path="$1"; shift
    now=$(date +%s)
    if then=$(stat -c %Y "$path" 2>/dev/null); then
        echo $((now-then))
    else
        echo missing
    fi
}

wait_for() {
    local delay="$1"; shift
    local path="$1"; shift
    age=$(file_age "$path")
    if [ $age = missing ]; then
        echo missing
    elif ((age < delay)); then
        echo waiting
    else
        echo finished
    fi
}

settling() {
    local path="$1"; shift
    case $(wait_for $((INTERVAL/2)) "$path") in
        missing) echo missing ;;
        waiting) echo yes ;;
        finished) echo no ;;
    esac
}

late() {
    local path="$1"; shift
    case $(wait_for $((INTERVAL*2)) "$path") in
        missing) echo missing ;;
        waiting) echo no ;;
        finished) echo yes ;;
    esac
}

dohost() {
    local them="$1"; shift
    debug Working on pings from $ME to $them
    # Note if we were previously broken.
    set -- $PINGDIR/ping.$ME-$them.*/broken
    if [ -n "$1" ]; then
        was_broken=true
    else
        was_broken=false
    fi
    debug was_broken=$was_broken
    # Clear any pings that have been answered
    for ping in $PINGDIR/ping.$ME-$them.*/ack; do
        dir=$(dirname $ping)
        if [ $(settling $dir) = yes ]; then
            debug Ignoring recently acknowledged ping $dir
            continue
        fi
        debug Clearing acknowledged ping $dir
        rm -rf $dir
    done
    # Check for old pings that have not been answered yet.
    is_broken=false
    for ping in $PINGDIR/ping.$ME-$them.*/syn; do
        dir=$(dirname $ping)
        if [ -f $dir/broken ]; then
            debug $dir remains broken
            continue
        fi
        if [ $(late $dir) = no ]; then
            debug Ignoring recently generated ping $dir
            continue
        fi
        is_broken=true
        echo $(date) > $dir/broken
        debug $dir is newly broken
    done
    if $was_broken && ! $is_broken; then
        echo Pings from $ME to $them have recovered
    elif ! $was_broken && $is_broken; then
        echo Pings from $ME to $them are failing, one of us is not syncing 1>&2
    fi
    # Create a new ping.
    newpingdir=$PINGDIR/ping.$ME-$them.$(date +%s)
    mkdir $newpingdir
    echo $(date) > $newpingdir/syn
    debug Created $newpingdir/syn
}

# Respond to pings sent to me.
for ping in $PINGDIR/ping.*-$ME.*/syn; do
    dir=$(dirname $ping)
    if [ -f $dir/ack ]; then
        debug Ignoring already acknowledged ping $dir
        continue
    fi
    result=$(settling $dir)
    if [ $result = missing ]; then
        # Other end deleted it
        debug Ignoring $dir after it disappeared
        continue
    elif [ $result = yes ]; then
        debug Ignoring recently received ping $dir
        continue
    fi
    debug Responding to $dir
    echo $(date) > $dir/ack
done

for them; do
    case "$them" in
        *\ *) echo no spaces allowed in host names 1>&2; exit 1 ;;
    esac
    dohost $them
done
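The script is meant to run once a minute on every machine in the group, matching the default INTERVAL of 60 seconds. A hypothetical crontab entry for that (the script path, host names, and email address below are placeholders, not from the original post):

```
# Mail any job output (the recovery/failure messages) to this address.
MAILTO=user@example.com
# Run the watchdog every minute, pinging the other machines in the group.
* * * * * $HOME/bin/syno-ping-check desktop laptop
```

cron mails a job’s stdout/stderr to MAILTO, which is how the messages the script prints become email notifications.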
Link: “Synology is increasingly relying on its own ecosystem for upcoming Plus models” (www.synology.com)

[1/2]
The first thing I self-hosted was on the Raspberry Pi 3B+ 4 years ago. This was the start of my Homelab.

I have added a lot of things since then. This post is a list of the services in my Homelab and how I and my family use them.

The services most used by the whole family, and therefore my favorites, are:
- Plex - for all moving pictures
- Synology Photos - for photos obviously
- Synology Drive - files and documents
- Home Assistant - for “Hey, you forgot to turn the lights off in your room, but never mind I can do it myself from the couch”.

Used by my wife together with me:
- Plane - project management
- Plausible Analytics - for analytics


Another gotcha upon moving a user's home directory to another volume (in my case an encrypted APFS container on an external 2TB NVMe drive via #thunderbolt): Dropbox will no longer use #FileProvider in #macOS for sync operations. That means there isn't a `~/Library/CloudStorage/Dropbox` folder in my new homedir; it just drops it in `~` like it used to.

Then I was like, well shit, who else isn't going to work correctly now? But #OneDrive is using the new location just fine. #SynologyDrive too!


Results of the #officesuite poll: Which office suite(s) do you use at least once a week? (in a typical week) [multiple responses possible]

#LibreOffice 59%, #MicrosoftOffice 42%, #GoogleDocs 25%, Other 7%

In "Other", people mentioned: 13x #Apple #iWorks, 8x #OnlyOffice, 8x text file editor (#emacs, #vim, #nano), 4x #Softmaker / #FreeOffice, 4x #OpenOffice, 3x #CollaboraOnline, 3x #LaTeX / #Pandoc, 2x #Inkscape, and 1x #Scribus #Cryptpad #Markdown #PapyrusAuthor #LotusSuite #SynologyDrive #Quip

Today's nerd rabbit hole: I set up #Syncthing on my #SteamDeck with the goal to sync files to my #Synology and then to other devices via #SynologyDrive.
Unfortunately it looks like Syncthing via Docker doesn't trigger the right inotify events in the host OS when it writes, so Synology Drive doesn’t pick up the changes it makes.

My crappy workaround: schedule a cron job every 5 minutes to copy the Syncthing files into Synology Drive. It ... works!
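For illustration, a minimal sketch of that kind of copy job (the Syncthing and Synology Drive paths are made-up placeholders; the original post doesn't give them). The point is that the copy runs on the host rather than inside the Docker container, so the kernel emits the inotify events Synology Drive watches for:

```shell
#!/bin/sh
# Sketch of the every-5-minutes workaround; a cron entry might look like:
#   */5 * * * * /home/deck/bin/copy-to-drive.sh
# Temp directories stand in for the real Syncthing and Drive folders.
SRC=$(mktemp -d)   # pretend: the Syncthing folder
DST=$(mktemp -d)   # pretend: the Synology Drive folder
echo "hello" > "$SRC/note.txt"
# cp -a preserves timestamps and attributes; because the write happens
# on the host, Synology Drive's inotify watches can see it.
cp -a "$SRC/." "$DST/"
cat "$DST/note.txt"
rm -rf "$SRC" "$DST"
```

In real use you'd replace the mktemp directories with the actual Syncthing and Synology Drive paths and drop the demo file.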