techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A hub primarily for passionate technologists, but everyone is welcome

#seaweedfs

bbₜᵤₓᵢ:
@coresec I didn't understand any of it, but it sounds good 👍

What exactly can you do with #seaweedfs?

CoreSec:
So, #seaweedfs doesn't like me, or I'm too dumb.

But I see #S3 seems to work too. I already have that running, so I'll start by trying that.

My old #docker host keeps causing more and more problems, so solutions are needed 😄

CoreSec:
Unfortunately, I discovered yesterday that my little #nomad project has a problem.

The #seaweedfs is running unstably, so of course I have to get that under control before I can continue.

D. Moonfire:
In about 3-5 days, I should have finally recovered from the corrupted hard drive. Did I mention that using `rsync` to recover the good files takes a long time? In this case, it will be about three weeks.

So far, it looks like I'm getting about 97% recovery. Of the 3% I can't, about two-thirds are recoverable from off-line backups once I pick them up. So I lost about 1% of 6 TB.

Overall, recovering from SeaweedFS is much like recovering from a dying laptop hard drive, as opposed to Ceph, which is much more difficult to recover from if you don't have enough backup copies to restore.

It also emphasizes the point that erasure coding is cool, but my home lab doesn't have even remotely the volumes to pull it off. Given that each EC volume is ten shards, I shouldn't have played with it until I had at least eleven volume servers. And multiple mount points for one volume server have some bugs with EC that I couldn't anticipate but have filed bugs for.

#SeaweedFS

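The post doesn't show the exact command, but a salvage pass like the one described usually amounts to a plain archive copy; the paths here are hypothetical:

```
# Salvage copy off a dying drive: -a preserves metadata, --partial keeps
# interrupted transfers, and rsync skips files it cannot read and reports
# them at the end, which is what makes the multi-week run worthwhile.
rsync -a --partial --info=progress2 /mnt/dying-disk/ /mnt/recovery/
```
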
CoreSec:
So, my #nomad, #ansible, #seaweedfs, and #consul project keeps taking shape. So far I have a single node where a #docker container runs with CSI and CNI and can be reached via DNS through #traefik.

Next steps are attaching a second host and somehow migrating the old data.

D. Moonfire:
I was in grade school when my mom first set up a RAID box in our house (where she ran her business as a consultant). It was a relatively small thing, but she was doing consulting work on storage systems and I got to play with hardware RAID cards, which was a lot of fun (I mean, I was ten and I was getting to play with a brand new Macintosh Plus, cutting-edge PCs, and anything else she could convince a customer to buy for her).

The first time we lost a drive, she and I spent hours trying to puzzle out how to recover it. There is a big difference between the theory of how RAIDs work and actually sitting at a table ten minutes before school watching it slowly jump from 3% recovered to 4. I mean, it felt like the slowest thing, since she was in the middle of a project and we needed the files.

The first thing I did when I got home was rush over to see that it was only 80-something percent. That put me in a sour mood. :) It wouldn't be done for another couple of hours, but then it worked! It finished about a half hour after she came home, and we interrupted dinner to check it out.

That was cool.

It wasn't until a few months later that I found where it didn't work. The house didn't have exactly clean power, and 80s technology wasn't exactly as reliable as it is today, so we lost another drive. And then, in the middle of the RAID 5 recovery, we lost a third drive.

That is when I realized the heartbreak of trying to fix something that couldn't be fixed. Fortunately, it was only a small project then, and we were able to recover most of it from memory and the files we did have.

We ended up upgrading the house to 200-amp service, and then I got some penalty chores helping my dad run new electrical lines to her office so she could have better power and we'd stop losing drives, but that's a different aspect of my childhood.

But it came out as a good lesson: drives will fail. It doesn't matter how big they are, how much you take care of them, or anything else. It also taught me that RAID is ultimately fragile. It handles "little" failures, but there is always a bigger failure.

Plus, history has strongly suggested that when my mother or I get stressed, computers have a tendency to break around us. After the derecho and the stunning series of bad luck (https://d.moonfire.us/tags/entanglement-2021/) I had for three years, high levels of stress around me cause things to break. I have forty years of history to back that. Hard drives are one of the first things to go around me, which has given me a lot of interest in resilient storage systems, because having the family bitching about Plex not being up is a good way to keep being stressed out. :D

I think that is why I gravitated toward Ceph and SeaweedFS. Yeah, they are fun, but the distributed network is a lot less fragile than a single machine running a RAID. When one of my eight-year-old computers dies, I'm able to shuffle things around and pull it out. Technology improves, or I get a few-hundred-dollar windfall and get a new drive.

It's also my expensive hobby. :D Along with writing.

And yet, cheaper than LEGO.

#SeaweedFS #Ceph

D. Moonfire:
Still stressed (MIL apologized for picking a fight, I apologized for getting angry), so I wrote a script that figures out which file is on what volume in my SeaweedFS, to handle the corruption from the two failed drives.

Ended up learning that `jq` can URL-encode paths:

```
echo "some path/bob" | jq --slurp --raw-input --raw-output @uri
```

And how to parse the metadata from SeaweedFS's file API to get the volumes that a file is on, then convert it into a column list I can easily grep:

```
cat {{META_CACHE}} \
  | grep "volume_id" \
  | jq -s '.' \
  | jq '[ .[] | { path: .FullPath, volumeId: .chunks[].fid.volume_id } ]' \
  | jq '[ .[] | [ .path, .volumeId ] ]' \
  | jq --raw-output '.[] | @tsv' \
  | column -t -N path,volume -s $'\t'
```

Well, at least I learned a few new tricks. My intent is to get the paths that are on corrupted volumes, `rsync` them off into a temporary location to recover as much as possible, delete the volumes, and then copy the files back. That should also give me a list of files I lost (which I'm hoping is not many).

#SeaweedFS

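As a follow-up to that listing, pulling out the files that live on one bad volume might look like this; the script name, volume id, and mount path are all hypothetical:

```
# Assume the path/volume listing above is wrapped in list-paths-by-volume.sh
# and volume 123 is one of the corrupted ones: extract its paths, then copy
# them off the FUSE mount (here /mnt/weed) into a safe location.
./list-paths-by-volume.sh | awk '$2 == 123 { print $1 }' > /tmp/vol123.txt
rsync -a --files-from=/tmp/vol123.txt /mnt/weed/ /tmp/recovered/
```
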
CoreSec:
So, I've now built a CSI setup with #seaweedfs.

Too bad I didn't know about it sooner. All in all, it's a nice project for making more effective use of distributed storage, it's very light on resources, and it's easy to set up.

It also does S3 out of the box.

D. Moonfire:
Last night, I think I found an interesting bug in SeaweedFS. Years ago, when I first used it, I set up one volume server per machine even if I had multiple drives. So I used `-dir /tmp/fs-001,/tmp/fs-002`, which worked great.

Then I decided to play a little with erasure coding (EC), which in the last few weeks I realized my home lab isn't nearly large enough to benefit from (you need more than 10 volume servers to really get the benefit). So I decided to decode them, but the multiple mount points appear to confuse `ec.decode`.

https://github.com/seaweedfs/seaweedfs/issues/6751

Which meant I had to delve into Nix's function calls so I could have it run one volume server per mount point. Most of my new machines now have three volume servers, and I'm down to one on the computer that I can no longer connect to a monitor and that is chewing up its drives.

Ironically, this means I now have more than ten volume servers, which means erasure coding would work, but I'm still going to decode as many as I can, destroy the corrupted ones (thanks to losing two drives last month and having the ten shards packed into five nodes), and then go from 010 (one extra copy in the data center, e.g., my house) to 020 (two copies in the house).

100 would be an extra copy in a different data center. I don't have two data centers, but if I ever got a couple of home-lab-obsessed friends and a wireguard setup between them, I would consider it.

001 means an extra copy on the same server, which pretty much negates the point of having distributed network storage, so I have no clue why that is an option.

Technically I could also use BackBlaze or DreamObjects as remote replication, which I want to do with the photo shoot files, but I have other things to play with there and I'm already using `restic` to back that up to BackBlaze, so it's not a priority other than just playing with new toys (like erasure coding, which I know is "not for you, Jen").

#SeaweedFS

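For reference, that replication code is three digits, one each for data centers, racks, and servers, and it can be set cluster-wide on the master or per request. A minimal sketch; the flag and API parameter exist in SeaweedFS, but the values are illustrative:

```
# Cluster-wide default: 020 = two extra copies on other racks in the same
# data center (010 = one extra copy on another rack, 100 = one extra copy
# in another data center, 001 = one extra copy on another server in the
# same rack).
weed master -defaultReplication=020

# Or per assignment through the master's HTTP API:
curl "http://localhost:9333/dir/assign?replication=020"
```
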
D. Moonfire:
Well, just figured out why I was having so much trouble with my network cluster: one of my drives just crapped out. Fortunately it was just a 2 TB slice and I have one copy, so I should recover about 95% of it (I was already in the middle of trying to recover some corrupted volumes for most of the week).

I think once I get the replacement, I'll bump up to 020 (two copies) but drop the erasure coding, since it looks like it isn't worth the pattern until I get over ten nodes, which probably will never happen because this is just a home lab.

Oh well, as far as I can tell, I didn't lose any photo shoots.

Well, back to writing.

#SeaweedFS #homelabbing

Stefano Marinelli:
It's probably #SeaweedFS

D. Moonfire:
As I'm bringing the new node online, I found a couple of corrupted volumes that SeaweedFS's shell `volume.balance -force` kept blowing up on while trying to copy them from one node to another.

Fortunately, I have one extra copy, so I found the volume server that held the other copy, used `volume.copy` to copy the volume from a good server, then `volume.delete` to delete it off the bad server.

Now `volume.balance` is happy.

#SeaweedFS

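A sketch of that repair flow from the `weed shell` REPL. The command names come from the post, but the flag spellings below are assumptions from memory and vary by version, so check `help volume.copy` and `help volume.delete` on your install first; the volume id and hostnames are hypothetical.

```
# Repair flow sketch, piped into weed shell. Flag names are assumptions;
# confirm with `help <command>` on your SeaweedFS version.
echo '
lock
volume.list
volume.copy -volumeId 123 -source good-node:8080 -target new-node:8080
volume.delete -volumeId 123 -node bad-node:8080
volume.balance -force
unlock
' | weed shell -master localhost:9333
```
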
ItzTrain:
Still fucking with #seaweedfs in the #homelab. Thing does feel dope, but the lack of documentation, and the gap between what you think it does and what it actually does, is a pretty big one. With not a lot out there to go on. It does have a ton of potential tho.

#selfhosted #selfhosting

D. Moonfire:
So, hacky trick with SeaweedFS for when you're in danger of volumes getting pinned read-only and you have a relatively small cluster (mine is only 40 TB with one extra copy, or 010, so effectively 20 TB). Keep 10% free as a default:

```
services.seaweedfs.clusters.default.volumes.c0.minFreeSpacePercent = 10;
```

When you run out of space, the cluster volumes flip the read-only flag, which means if you go and delete a terabyte or so off the system, they don't pick it up (because the individual volumes are read-only).

So what I do is change the free space to 5%, do the deletes, and then change it back to 10% once usage goes under the threshold. There were a few times when I had to do the same thing with Ceph. I suspect most distributed file systems really don't like to be more than 90% full in general.

I actually go one step further by having a `cluster.nix` file that includes things like the free percentage, so I can change one file, do a deploy to the entire homelab, do some maintenance, and change it back.

I really should set up helm or something to tell me about these things before they happen, but I haven't really figured out how on Nix. It was just too overwhelming, and I couldn't find any useful monitoring that would ping me when there was an issue.

I've also been ignoring a problem with my one little Raspberry Pi in the cluster. It only had a 1 TB drive on it, but that was preventing me from rebalancing everything. I've used some of my tax refund to get a new node to replace it, but it will take a while as it races the tsunami of tariffs that are following the order.

#SeaweedFS #nixos

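The workaround itself can be scripted; a hypothetical sketch, assuming the threshold lives in a single `cluster.nix` that the whole homelab imports:

```
# Temporarily drop the free-space floor so the read-only volumes become
# writable again, reclaim space, then restore the 10% floor.
sed -i 's/minFreeSpacePercent = 10/minFreeSpacePercent = 5/' cluster.nix
# ...deploy to the homelab, delete the terabyte of data...
sed -i 's/minFreeSpacePercent = 5/minFreeSpacePercent = 10/' cluster.nix
# ...deploy again once usage is back under the threshold.
```
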
ItzTrain:
Aight.. So.. After fucking with this thing for the better part of the week in the #homelab, I have thoughts.

#seaweedfs! Feels like an open-source project. It's aight so far. It's not as simple as you'd think just because it's one go binary; it's simple in that it's one go binary. Feels like that nerdy-ass teenager you knew about in high school. Fucked with the cool kids and was super smart.. just never really felt like they were part of the crew! Yeah, this feels like that!

#moosefs feels like a product! It's robust, I never really had to fuck with it after setup, a solid way of getting shit done, been around forever.. You gotta pay for the extra shit tho!... It feels like that plastic lining your grandma had on her couches! Fucking old, but that shit was always clean! This felt like grandma's house!

#selfhosted #selfhosting

ItzTrain:
I got #seaweedfs working in the #homelab now, the way I want it (possibly). I also came up on some 3.8 TB Micron 5120 or something enterprise SSDs that are 2 years old but still have 100% and 97% of their life left according to smartctl. I'm still using #moosefs and I don't know how I got here :)

#selfhosted #selfhosting

ItzTrain:
The one thing I gotta mention about this #seaweedfs thing is this volume size limit thing. Do you make it bigger, do you make it smaller? I fucked up and now I can't get the volumes the right size. It's sort of unforgiving. So far my Linux ISOs are working pretty well on an object-store-type system. rclone does require some tuning to make it work and keep the cache from eating up everything.

#homelab #selfhosted #selfhosting

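The limit in question is a master-side flag, and the rclone cache has its own knobs. The flags below are real; the values are illustrative, and `weed-s3:` is a hypothetical rclone remote name:

```
# Volume size limit lives on the master (default 30000 MB). Changing it
# later does not resize existing volumes, which is the unforgiving part.
weed master -volumeSizeLimitMB=10000

# rclone mount tuning so the VFS cache doesn't eat the whole disk:
rclone mount weed-s3:isos /mnt/isos \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 24h
```
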
ItzTrain:
Ok, so it took me a while and I still feel like I'm not fully sure of #seaweedfs. I got 3 master servers (which I don't think I need). I have 1 filer, one s3 gateway, and one volume server (right now). I'm uploading to it right now and using rclone (which is pretty dope) to mount them to my services. Right now it's just Linux ISOs.

#homelab #selfhosted #selfhosting

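That layout maps one `weed` subcommand to each role; a minimal single-host sketch with default ports (a real three-master quorum would add `-peers` to each master, and the data paths here are hypothetical):

```
# One process per role; ports are the defaults.
weed master -port=9333 -mdir=/data/master &
weed volume -port=8080 -dir=/data/volume -mserver=localhost:9333 &
weed filer -port=8888 -master=localhost:9333 &
weed s3 -port=8333 -filer=localhost:8888 &
```
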
ItzTrain:
Started my distributed file system journey with #moosefs. Been rock solid. Then I found #seaweedfs, and it comes with a built-in s3 gateway, it's built in go.. Seems simple enough to give it a try in the #homelab. Have we lost the ability to write documentation.. just code! 2 hours later, I have 7 terminal tabs open, all running 4 specific processes to get this thing working.. and it's still not working! 😂 Documentation says: just run ./this_fucking_command. Then run ./this_fucking_command and the thing is up.. simple.

#selfhosted #selfhosting

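For what it's worth, the single binary does have an all-in-one mode that collapses those terminal tabs into one command; a minimal sketch with a hypothetical data path:

```
# weed server runs master, volume server, and filer in one process;
# -s3 adds the S3 gateway on top.
weed server -dir=/data -s3
```
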
Dan ⁂:
So many tutorials and guides use #minio for a #selfhosted #S3 interface, but looking into their license, that seems highly impractical?!

Anyone using or running #seaweedfs? Seems like a great alternative.

https://github.com/seaweedfs/seaweedfs

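A minimal sketch of pointing a stock S3 client at SeaweedFS instead of MinIO; port 8333 is the S3 gateway default, and the credentials and bucket name are hypothetical:

```
# Any S3-compatible client works against the weed s3 endpoint; credentials
# depend on your S3 identity config (placeholder values below).
export AWS_ACCESS_KEY_ID=some-access-key
export AWS_SECRET_ACCESS_KEY=some-secret
aws --endpoint-url http://localhost:8333 s3 mb s3://backups
aws --endpoint-url http://localhost:8333 s3 cp ./file.txt s3://backups/
```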