techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A hub primarily for passionate technologists, but everyone is welcome

#mutex


🔒💻 Behold, the latest masterpiece for those who dream of locking their #bash scripts in eternal embrace with a mutex! Because who doesn't love making the simplest tasks into a cryptic #semaphore puzzle? 🚀🤖 Meanwhile, #GitHub casually hints that #AI should write your code, perhaps after it untangles this weave of command line wizardry. 🙄
github.com/bigattichouse/waitl #scripting #mutex #coding #HackerNews #ngated

Linux commandline tool to provide mutex/semaphore process safety for long running bash/sh operations. - bigattichouse/waitlock

More interesting progress trying to make #swad suitable for very busy sites!

I realized that #TLS (with both #OpenSSL and #LibreSSL) is a *major* bottleneck. With TLS enabled, I couldn't cross 3000 requests per second while keeping somewhat acceptable response times (most below 500ms). With TLS disabled, I could really see the impact of a #lockfree queue as opposed to one protected by a #mutex: with the mutex, around 8000 req/s could be reached on the same hardware, while the lockfree design quickly went beyond 10k req/s ... and then crashed. 😆

So I read some scientific papers 🙈 ... and redesigned a lot (*). And now it finally seems to work. My latest test reached a throughput of almost 25k req/s, with response times below 10ms for most requests! I really didn't expect to see *this* happen. 🤩 Maybe it could do even more, didn't try yet.

Open issue: Can I do something about TLS? There *must* be some way to make it perform at least a *bit* better...

(*) edit: Here's the design I finally used, with a much simplified "dequeue" because the queues in question are guaranteed to have only a single consumer: dl.acm.org/doi/10.1145/248052.

I recently took a dive into #C11 #atomics to come up with alternative queue implementations not requiring locking some #mutex.

TBH, I have a hard time understanding the #memory #ordering constraints defined by C11. I mean, I code #assembler on a #mos6502 (for the #c64), so caches, pipelines and all that modern crap is kind of alien rocket science anyways 😆.

But seriously, they try to abstract from what the hardware provides (different kinds of memory barrier instructions, IMHO somewhat easier to understand), so the compiler can pick the appropriate one depending on the target CPU. But wrapping your head around their definition really hurts the brain 🙈.

Yesterday, I found a source telling me that #amd64 (or #x86 in general?) always has strong ordering for reads, so no matter which ordering constraint you put in your atomic_load and friends, the compiler will generate the same code and it will work. Oh boy, how am I ever supposed to verify my code works on e.g. aarch64 without owning such hardware?

Continued thread

Nice, #threadpool overhaul done. Removed two locks (#mutex) and two condition variables, replaced by a single lock and a single #semaphore. 😎 Simplifies the overall structure a lot, and it's probably safe to assume slightly better performance in contended situations as well. And so far, #valgrind's helgrind tool doesn't find anything to complain about. 🙃

Looking at the screenshot, I should probably make #swad default to *two* threads per CPU and expose the setting in the configuration file. When some thread jobs are expected to block, having more threads than CPUs is probably better.

github.com/Zirias/poser/commit

Replied in thread

@drfootleg Another tip, about an issue that had been plaguing me for a month!

If you have processes inside a Docker container and other processes outside the Docker container that need mutex protection such as I2C bus or SPI bus accesses, you need to map the bus (obviously) **and** the mutex folder!

I had the sensor access working great but could not figure out why my inside and outside "mutex protected" accesses were colliding...

#tui #daw for #jack in #rustlang

Rewrote the UI of the #midi sequencer with #ratatui for double buffering.

We're polyphonic now! Multithreaded, too - I put a #mutex between input/render/audio threads instead of message passing.

Doesn't even properly support Note Off events yet, but it does play a mean cheerful dirge. Playback cursor position still off, though.

Hooking this up to the VST host every time I restart either one is a bit of a faff. Gotta teach it to auto-connect the MIDI/audio ports...