
Rethinking Java Streams: Gatherer for more control and parallelism

Since version 8, Java has offered an elegant, functional approach to processing data sets with the Streams API. The terminal operation collect(…) forms the bridge from the stream to a targeted aggregation – in the form of lists, maps, strings or more complex data structures. Until now, this step was handled by Collector instances, which internally consist of a supplier, an accumulator, a combiner and, optionally, a finisher. This model works well for simple accumulations but has noticeable limitations, particularly for complex, stateful, or conditional aggregations.

  1. The semantics of gatherers
  2. Why Gatherers are more than just a “better collector”
  3. Integration with the Streams API
    1. Sequential Gatherer
      1. Initialisation: The state of the gatherer
      2. Accumulation: Processing the input elements
      3. Emission: Control over the output
      4. Signature and purpose of the finisher
      5. Concrete example: chunking with remainder
      6. Interaction with accumulation
      7. No return value – but effect through push()
    2. Parallel Gatherer
      1. Initialiser – The creation of the accumulator
      2. Integrator – The processing of elements
      3. Combiner – The combination of partial accumulators
      4. Finisher – The transformation of the accumulator into the final result
      5. Interaction in parallel gatherers
  4. An example implementation
    1. Initialiser
    2. Integrator
    3. Combiner
      1. Inserting the entries of state1
      2. Inserting the entries of state2
    4. Finisher

With Java 22, the new interface java.util.stream.Gatherer was introduced as a preview feature (finalised in Java 24), significantly expanding the semantics of, and control over, the accumulation process. Whereas a Collector passively collects data, a Gatherer actively responds to the incoming elements – comparable to a transducer in functional programming languages. Gatherers are particularly useful where procedural or stateful aggregation is necessary, and they also allow element insertion, filtering, skipping, and explicit termination of the gathering process – all within a functional, composable architecture.

The semantics of gatherers

A Gatherer<T, A, R> describes the transformation of a Stream<T> into results of type R, with close control over the accumulation process via an intermediate state of type A. In contrast to a Collector, which is essentially a container for aggregation logic, a gatherer allows rule-based, state-dependent processing of inputs – including the ability to skip elements (drop), insert additional ones (inject), or end processing early (finish early).

To make this possible, a gatherer is built around the idea of a downstream handle (Gatherer.Downstream), which is invoked in the context of the stream pipeline. The gatherer receives every input element, can react to it, and thereby actively influences the flow of processing. The stream framework wires these pieces together and manages the transitions between the aggregation phases.
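
A minimal sketch (the method name passThrough is illustrative, assuming the finalised Java 22+ Gatherer API) makes these moving parts concrete: the integrator receives state, element and downstream, may push, and returns a boolean that signals whether it wants more input.

// requires: import java.util.stream.Gatherer; import java.util.stream.Stream;
<T> Gatherer<T, Void, T> passThrough() {
    return Gatherer.ofSequential(
        // integrator: push every element through unchanged; push() reports
        // whether the downstream still accepts elements, so returning its
        // result also propagates early-termination requests
        (state, element, downstream) -> downstream.push(element)
    );
}

// Stream.of(1, 2, 3).gather(passThrough()).toList()  // [1, 2, 3]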

Why Gatherers are more than just a “better collector”

While the conventional Collector serves primarily as a final accumulation tool – i.e., to transfer the elements of a stream into a target structure such as a list, a map or an aggregate value – a Gatherer works conceptually far beyond this role. It is not just an optimised or syntactically different form of Collector, but an independent mechanism that opens up new expressive possibilities for stream processing, both semantically and functionally.

The central difference lies in the expanded scope for action during the transformation: a Gatherer can not only accumulate elements but also feed new, previously non-existent elements into the data stream. This makes it possible, for example, to introduce initialisation values at the beginning of a stream, or to place control elements such as headers and footers specifically at the beginning or end – without artificially expanding the original data stream, as the following sketch shows.
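
As a sketch of this injection capability (names are illustrative, not from the original text): a gatherer that pushes a synthetic header before the first real element and a footer after the last one.

Gatherer<String, ?, String> withHeaderAndFooter(String header, String footer) {
    return Gatherer.ofSequential(
        () -> new boolean[]{false},                    // "header already emitted?" flag
        (state, element, downstream) -> {
            if (!state[0]) {
                state[0] = true;
                downstream.push(header);               // inject an element that never was in the input
            }
            return downstream.push(element);
        },
        (state, downstream) -> downstream.push(footer) // inject a trailing element
    );
}

Note that an empty input would emit only the footer here; whether that is desirable depends on the use case.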

This creative freedom becomes particularly clear when dealing with conditions. Where a Collector usually operates with a simple accumulator whose state leads to a final result, a Gatherer can work state-based – and allow this state to be influenced across several elements. This opens up new semantic horizons: for example, window operations can be formulated in which temporal or sequential logic is applied – such as aggregating data up to an explicit “end” marker, or combining groups of elements that can only be identified by a particular order or content structure, as sketched below.
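
A sketch of such a window operation (the "END" marker and all names are illustrative): elements are buffered until the marker arrives, and each completed group is emitted as one value.

Gatherer<String, ?, String> groupsUntilEnd() {
    return Gatherer.ofSequential(
        () -> new ArrayList<String>(),
        (buffer, element, downstream) -> {
            if ("END".equals(element)) {               // the marker closes the current window
                boolean more = downstream.push(String.join(" ", buffer));
                buffer.clear();
                return more;
            }
            buffer.add(element);
            return true;
        },
        (buffer, downstream) -> {                      // flush a trailing, unterminated group
            if (!buffer.isEmpty()) {
                downstream.push(String.join(" ", buffer));
            }
        }
    );
}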

Even complex decision structures, such as those required in multi-stage parsing processes or when traversing decision trees, can be implemented elegantly and declaratively using stateful gatherers. The interface remains in the spirit of functional programming: transformation and aggregation can still be described separately, but the Gatherer connects them in a way that was previously only possible through imperative code or hard-to-maintain stream hacks.

Another advantage is the controlled influence of past elements on current behaviour. A Gatherer can, for example, decide to discard an element because a previous element set a certain context. This context sensitivity is particularly relevant where data streams are structurally “not clean” – such as log files, inconsistent data exports, or natural-language analysis.
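
A compact sketch of such context sensitivity (an illustrative "uniq"-style gatherer, not from the original text): an element is discarded because the previous element already established the same value.

Gatherer<String, ?, String> dropConsecutiveDuplicates() {
    return Gatherer.ofSequential(
        () -> new String[1],                 // remembers the previous element
        (prev, element, downstream) -> {
            if (element.equals(prev[0])) {
                return true;                 // drop: the context was set by the predecessor
            }
            prev[0] = element;
            return downstream.push(element);
        }
    );
}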

A Gatherer is thus not just a “better Collector”, but a fundamentally new tool for stream-based modelling of complex transformation logic. It opens up a design space in which state, transformation, context and accumulation can interact in a way that was previously only possible with considerable effort – or outside the stream model. Anyone who has worked with stateful Gatherer constructions will notice how much this expands the expressiveness of functional stream architectures.

A concrete example: grouping with filter logic

Let’s imagine that we want to collect from a stream of strings only those elements that have a particular property and then group them – for example, all words longer than 5 characters, grouped by their first letter. This requirement can be formulated with a Collector, but requires a combination of preprocessing (e.g. filter(…)) and downstream grouping. With a Gatherer, on the other hand, this combined process can be expressed elegantly, comprehensibly and in one step:

Gatherer<String, ?, Map<Character, List<String>>> gatherer =
    Gatherer.ofSequential(
        () -> new HashMap<Character, List<String>>(),
        (map, element, downstream) -> {
            if (element.length() > 5) {
                char key = element.charAt(0);
                map.computeIfAbsent(key, k -> new ArrayList<>()).add(element);
            }
            return true;
        },
        (map, downstream) -> downstream.push(map) // emit the completed grouping
    );

In this example, a decision is made for each element as to whether it contributes to the result. The logic is embedded directly in the Gatherer. The return value true signals that processing should continue. If you returned false at this point instead, the stream would end prematurely – behaviour that cannot be achieved with a conventional Collector.
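
Applied to sample data (the input words are made up), the gatherer could be used like this:

var grouped = Stream.of("lambda", "stream", "map", "gather", "sink", "collector")
    .gather(gatherer)   // the finisher pushes exactly one map
    .findFirst()
    .orElse(Map.of());
// {l=[lambda], s=[stream], g=[gather], c=[collector]} – iteration order not guaranteed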

Integration with Streams API

The interface Gatherer<T, A, R> explicitly distinguishes between sequential and parallel processing. The central distinction arises from the factory methods:

Gatherer.ofSequential(…) // can only be used sequentially

Gatherer.of(…) // with a combiner: also suitable for parallel streams

A gatherer created with Gatherer.of(…) and a combiner may be used in parallel streams, but must meet certain requirements: its accumulators must be thread-safe or thread-isolated. This mirrors the logic of parallel collectors, where internal state management allows different elements to be processed simultaneously in independent threads.

Sequential Gatherer

It is in sequential processing in particular – i.e., when there is no parallelisation – that the Gatherer develops its full expressiveness while remaining simple, type-safe, and deterministic.

The functionality of a sequential gatherer can be divided into three main phases: initialisation, accumulation and emission. Each of these phases is described in detail below, with particular attention to the characteristics of sequential processing.

Initialisation: The state of the gatherer

Each gatherer has an internal state that is created anew per stream execution. This state is defined by a Supplier<S>, where S is the type of the state. In sequential processing, this state is reserved exclusively for a single thread; therefore, no thread-safety requirements exist. Simple objects like ArrayList, StringBuilder, counter arrays, and custom records can be used without problems.

A typical state could look like this:

record ChunkState<T>(List<T> buffer, int chunkSize) {}

The associated supplier:

() -> new ChunkState<>(new ArrayList<>(chunkSize), chunkSize)

The state lives for the entire stream run and serves as context for all further processing steps. Initialisation lays the foundation for state-based logic, such as buffering, counting, aggregation or tracking previous elements.
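
For common buffering cases, the JDK also ships prebuilt gatherers in java.util.stream.Gatherers; for example, windowFixed covers plain chunking, including a smaller final window:

// requires: import java.util.stream.Gatherers;
var chunks = Stream.of(1, 2, 3, 4, 5, 6, 7)
    .gather(Gatherers.windowFixed(3))
    .toList();
// [[1, 2, 3], [4, 5, 6], [7]]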

Accumulation: Processing the input elements

The accumulation function is the heart of every gatherer. It is called for each element of the input stream. The signature of this function is:

(state, input, downstream) -> { … }

This is where the actual logic happens: the input element is brought into the state, and – depending on the current state – a decision is made as to whether (and, if so, how many) output values are produced. The decision as to whether an element is passed downstream rests entirely with the gatherer. The integrator also returns a boolean: true requests further input, while false ends processing early.

Example: Every third element of a stream should be emitted.

Gatherer<String, ?, String> everyThird() {
    return Gatherer.ofSequential(
        () -> new int[]{0},
        (state, element, downstream) -> {
            state[0]++;
            if (state[0] % 3 == 0) {
                downstream.push(element);
            }
            return true; // keep consuming input
        }
    );
}

In contrast to classic filter or map operations, this logic is conditional and stateful: the gatherer remembers how many elements have already been processed and only emits a result for every third one. The accumulation logic is therefore comparable to the accept() method of a custom Consumer, but supplemented by downstream control.
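
A short usage sketch:

var result = Stream.of("a", "b", "c", "d", "e", "f", "g")
    .gather(everyThird())
    .toList();
// [c, f] – only every third element is emitted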

Since there is no threading in sequential processing, all operations can be performed directly without synchronisation. The state can be arbitrarily complex and dynamic as long as it is updated correctly within the stream.

Emission: Control over the output

Elements are output via the Gatherer.Downstream<R> object that is provided to the gatherer on each accumulation step. With its method push(R element), elements can be passed downstream in a targeted and controlled manner. Unlike map or flatMap, where each input is automatically transformed into one or more outputs, the gatherer itself decides whether, when and what it emits.

For example, a gatherer can:

  • push individual output values,
  • push multiple values at once (e.g. with chunking or tokenisation),
  • completely suppress emission (e.g. when preconditions are not met),
  • generate values with a delay (e.g., at the stream’s end or after accumulation thresholds).

Example: Combining three consecutive elements into a string:

Gatherer<String, ?, String> triplets() {
    return Gatherer.ofSequential(
        () -> new ArrayList<String>(3),
        (state, element, downstream) -> {
            state.add(element);
            if (state.size() == 3) {
                downstream.push(String.join("-", state));
                state.clear();
            }
            return true;
        });
}

The emission here only occurs when three elements have been collected. These are joined, pushed, and the state is then cleared.
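
A usage sketch also exposes the weakness that the next section addresses: elements still sitting in the buffer when the input ends are silently lost.

var joined = Stream.of("a", "b", "c", "d", "e", "f", "g")
    .gather(triplets())
    .toList();
// [a-b-c, d-e-f] – "g" remains in the buffer and is never emitted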

An often overlooked but essential component of a sequential gatherer is the finisher – the closing function invoked after the last input element. This phase is crucial because regular accumulation often retains elements in the state that would otherwise never be emitted. The finisher ensures that such remaining elements or aggregated partial results are not lost but are correctly passed downstream.

Signature and purpose of the finisher

The closing function has the signature:

BiConsumer<A, Gatherer.Downstream<? super R>>

It is called by the stream framework after all input values have been processed – exactly once. In this function, the final state can be accessed and, based on this state, a decision can be made as to whether additional output values should be produced.

The finisher is particularly suitable for:

  • partially filled buffers, for example in chunking operations when the last block does not reach full size,
  • final aggregations, e.g. averaging, summation, hash calculation or protocol completion,
  • finalisation of state machines, e.g. when an open state still needs to be closed,
  • cleanup or logging, e.g. statistical outputs or final markers.

Concrete example: chunking with remainder

Let’s look again at the example of a gatherer that groups elements into groups of three. Without a finisher, the last one or two values would be lost whenever the number of elements is not a multiple of three:

Gatherer<String, ?, String> triplets() {
    return Gatherer.ofSequential(
        () -> new ArrayList<String>(3),
        (state, element, downstream) -> {
            state.add(element);
            if (state.size() == 3) {
                downstream.push(String.join("-", state));
                state.clear();
            }
            return true;
        },
        (state, downstream) -> {
            if (!state.isEmpty()) {
                downstream.push(String.join("-", state));
            }
        }
    );
}

In the closing function (the finisher), it is explicitly checked whether there are still elements in the state – i.e., whether the buffer is incomplete. These residual values are then combined into an aggregate and pushed.

Without the finisher, the gatherer would be functionally incomplete: for input sizes that are not a multiple of three, the last chunk would simply be discarded – a classic off-by-one error.
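
With the finisher in place, the same kind of input now yields the remainder as well (a usage sketch):

var joined = Stream.of("a", "b", "c", "d", "e")
    .gather(triplets())
    .toList();
// [a-b-c, d-e] – the finisher emits the incomplete final chunk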

Interaction with accumulation

The finisher is semantically separate from the accumulation logic but accesses the same state. This means that, depending on the application, it can use the same helper functions or serialisation routines as the accumulation itself. In practice, it is advisable to extract helper methods for shared logic such as “join and clear the buffer” in order to avoid redundancy, as sketched below.
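
One possible factoring (a sketch; flushJoin is an illustrative name) that both the integrator and the finisher can call:

void flushJoin(List<String> buffer, Gatherer.Downstream<? super String> downstream) {
    if (!buffer.isEmpty()) {
        downstream.push(String.join("-", buffer)); // join and push the buffered elements
        buffer.clear();                            // reset the buffer for the next chunk
    }
}

The integrator would call flushJoin(state, downstream) once the buffer reaches the chunk size, and the finisher would call it unconditionally at the end.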

No return value – but effect through push()

The finisher returns no value; like the accumulation function, it works via the provided Downstream. There are no return semantics here – instead, the processing is completed in a controlled way through push().

The finisher of a sequential gatherer is the binding conclusion of the processing model. It guarantees that all information remaining in the state is processed and, if necessary, emitted. Especially in stream-based applications where incomplete blocks, open ends or residual states are typical, the finisher is essential to avoid data loss and ensure semantic correctness. In a clean gatherer design, the finisher is therefore not optional but an integral part of a well-defined stream-processing step.

A sequential gatherer combines:

  • the state handling of an aggregator,
  • the control flow options of a parser,
  • the expressiveness of an imperative processor,
  • and the clarity of functional APIs.

By foregoing parallelisation logic and concurrency, the sequential variant allows a gatherer to be developed with minimal overhead and maximum expressiveness – a tool that combines the flexibility of imperative logic with the compositionality of functional programming.

Parallel Gatherer

A parallel gatherer serves parallel data-processing pipelines; its four phases – initialiser, integrator, combiner and finisher – can be explicitly separated and controlled independently of one another.

Initialiser – The creation of the accumulator

The initialiser defines how a new accumulator (the internal state) is created. This is the first step in processing each substream, in sequential as well as parallel pipelines.

The signature is: Supplier<A> initializer();

In parallel processing, this initialiser is called several times – once per substream, i.e. per thread that takes over a split of the data. This ensures that no synchronisation within the accumulator is necessary: each thread operates in isolation on its own state.

Integrator – The processing of elements

The integrator is the central function for inserting stream elements into the accumulator: for each element of type T it updates the accumulator A accordingly and can emit results to the downstream.

The signature reads: Gatherer.Integrator<A, T, R> integrator();

In parallel streams, this integrator is likewise applied to the partial accumulators. What is important here is that the function may only change its accumulator locally and must not influence any global state.

Combiner – The combination of partial accumulators

The combiner’s task is to combine several independently processed accumulators into one overall accumulator. This phase is only relevant in parallel stream executions. The combiner receives two partial results—typically from two threads—and has to combine them into a common result.

The signature is: BinaryOperator<A> combiner();

The correct implementation of the combiner is essential for the correctness of parallel execution. It must be associative; only then can the JVM freely distribute the operation across multiple threads.
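
Associativity means that the grouping of combine steps must not affect the result; for any partial accumulators a, b and c:

combiner.apply(combiner.apply(a, b), c)  ==  combiner.apply(a, combiner.apply(b, c))

For the map-merging combiner shown below, this holds as long as no meaning is attached to the order of elements within the per-block lists.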

Finisher – The transformation of the accumulator into the final result

The finisher function transforms the accumulator A into the desired result R. While A is used internally to work efficiently during aggregation, R is the actual result of the entire operation – such as an immutable collection, a merged string, an Optional, a report object, etc.

The signature reads: BiConsumer<A, Gatherer.Downstream<? super R>> finisher();

Unlike the integrator and combiner, the finisher is called exactly once, at the end of the entire processing chain. It therefore serves as the bridge between the internal aggregation mechanism and the external representation.

Interaction in parallel gatherers

In a parallel stream with a gatherer, the following happens:

  1. An initialiser is called for each partial stream (split) to create a local accumulator.
  2. The integrator processes all elements in the respective substream one after the other and modifies the local accumulator.
  3. The combiner phase is called as soon as two partial accumulators need to be combined. This happens either through ForkJoin merging or in the final reduction step.
  4. After all partial accumulators have been combined, the finisher is called to calculate the final result.

This explicit separation and the ability to control each phase make the Gatherer a powerful tool for complex stateful or stateless aggregations – especially when performance through parallelisation is crucial or custom aggregation logic is required.

An example implementation

Let’s first define the gatherer in general terms and place it in a stream.

// END_INCLUSIVE is an int constant assumed to be defined elsewhere (the upper bound of the range)
Gatherer<Integer, ?, ConcurrentMap<Integer, List<Integer>>> concurrentGatherer =
    Gatherer.of(
        initializer,
        integrator,
        combiner,
        finisher
    );

IntStream.rangeClosed(0, END_INCLUSIVE)
    .boxed()
    .parallel()
    .gather(concurrentGatherer)
    .forEach(e -> System.out.println("e.size() = " + e.size()));

Now we define the respective subcomponents:

Initialiser:

var initializer = (Supplier<ConcurrentMap<Integer, List<Integer>>>) ConcurrentHashMap::new;

Integrator:

var integrator = new Gatherer.Integrator<
        ConcurrentMap<Integer, List<Integer>>,
        Integer,
        ConcurrentMap<Integer, List<Integer>>>() {
    @Override
    public boolean integrate(ConcurrentMap<Integer, List<Integer>> state,
                             Integer element,
                             Gatherer.Downstream<
                                 ? super ConcurrentMap<Integer, List<Integer>>> downstream) {
        if (element > END_INCLUSIVE) return false; // processing can be interrupted
        int blockStart = (element / 100) * 100;
        state
            .computeIfAbsent(blockStart, k -> Collections.synchronizedList(new ArrayList<>()))
            .add(element);
        return true;
    }
};

The integrator is responsible for processing the individual stream elements (here: Integer) and inserting them into a shared state (a ConcurrentMap). Each element is sorted by a grouping criterion: all elements that fall into the same block of 100 (e.g. 0–99, 100–199, …) end up in the same list.

There is a special feature in this implementation:

if (element > END_INCLUSIVE) return false;

This condition serves as an abort signal: as soon as an element exceeds a specified limit (END_INCLUSIVE), processing is terminated by returning false. This is a special feature of the Gatherer model: the return value false signals that no further elements should be processed – a form of early termination.

state
    .computeIfAbsent(blockStart, k -> Collections.synchronizedList(new ArrayList<>()))
    .add(element);

This line does the actual grouping: if no list exists yet under the key blockStart, a new, synchronised ArrayList is created. The current element is then added to this list.

Using Collections.synchronizedList(…) ensures that list accesses are thread-safe even within a parallel gatherer context – the ConcurrentMap itself is only responsible for safe map access, not for the values it contains.

The integrator therefore defines the following processing semantics:

  • Elements are grouped in blocks of 100 (0–99, 100–199, etc.).
  • The assignment is done via a ConcurrentMap, where each block maps to a list.
  • The lists themselves are synchronised to allow concurrent list operations.
  • By returning false, processing can be ended early – e.g., when a limit value is reached.

Combiner:

var combiner = new BinaryOperator<ConcurrentMap<Integer, List<Integer>>>() {
    @Override
    public ConcurrentMap<Integer, List<Integer>> apply(
            ConcurrentMap<Integer, List<Integer>> state1,
            ConcurrentMap<Integer, List<Integer>> state2) {
        var mergedMap = new ConcurrentHashMap<Integer, List<Integer>>();
        // insert the entries of state1 (into fresh lists, so state1 stays untouched)
        state1.forEach((k, v) ->
            mergedMap
                .computeIfAbsent(k, k1 -> new ArrayList<>())
                .addAll(v));
        // insert the entries of state2, merging with any existing lists
        state2.forEach((key, value) ->
            mergedMap.merge(key, value, (v1, v2) -> {
                v1.addAll(v2);
                return v1;
            }));
        return mergedMap;
    }
};

First, a new, empty, thread-safe map is prepared, into which all entries of state1 and state2 are to be inserted. This new map is deliberately fresh: neither state1 nor state2 should be modified, which protects against unwanted side effects and keeps the function referentially transparent.

var mergedMap = new ConcurrentHashMap<Integer, List<Integer>>();

Inserting the entries of state1

state1.forEach((k, v) ->
    mergedMap
        .computeIfAbsent(k, k1 -> new ArrayList<>())
        .addAll(v));

The method computeIfAbsent checks whether the target map mergedMap already contains an entry for the key k. If not, the lambda k1 -> new ArrayList<>() creates a new entry and inserts it. The method guarantees that an existing, modifiable list is returned afterwards – regardless of whether it was just created or already existed.

The method addAll(…) then appends all elements of the list v from state1 to the corresponding list in mergedMap. This expands the aggregate state for this key in the target map.

Inserting the entries of state2

The same process is then repeated for state2:

state2.forEach((key, value) ->
    mergedMap.merge(key, value, (v1, v2) -> {
        v1.addAll(v2);
        return v1;
    }));

Every entry of state2 is transferred into mergedMap. If the key does not yet exist, the value (a List<Integer>) is taken over directly. If the key is already present in mergedMap, merge(…) applies the custom merge strategy: the list contents of both maps are combined via v1.addAll(v2). What is important here is that v1 is the existing list and v2 the newly added one.

In the end, the newly created, combined map is returned—it represents the complete aggregate state, which contains the contents of both partial states.

Finisher:

var finisher = new BiConsumer<
        ConcurrentMap<Integer, List<Integer>>,
        Gatherer.Downstream<? super ConcurrentMap<Integer, List<Integer>>>>() {
    @Override
    public void accept(
            ConcurrentMap<Integer, List<Integer>> state,
            Gatherer.Downstream<? super ConcurrentMap<Integer, List<Integer>>> downstream) {
        downstream.push(state);
    }
};

This implementation of the finisher is minimalistic but functionally correct: it takes the final state of the accumulator – a ConcurrentMap<Integer, List<Integer>> – and hands it directly to the downstream, i.e. the next stage of the stream’s processing chain.

The parameter state is the aggregated result of the previous steps (initialiser, integrator, combiner). In this case it is a map that maps blocks of 100 (Integer keys) to a list of the associated values (List<Integer>).

The parameter downstream is a push receiver that consumes the end result. It abstracts the next processing stage, such as a downstream map, flatMap, or terminal collection operation.

The method push(…) of the downstream object explicitly forwards the finished result to the next processing stage. This is fundamentally different from conventional Collector concepts, where the end result is simply returned.

This type of handover makes it possible, within a stateful Gatherer context, to deliver several or even conditional results – for example:

  • streaming of intermediate results (e.g. after a specific batch),
  • quitting early after the first valid result,
  • multiple emissions during partitioning.

In this specific case, however, exactly one result was passed on – the fully aggregated map. This is the classic “push-forward finisher”, which emits the final state as the result.
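
For contrast, a finisher that exploits this freedom could emit one result per aggregated block instead of a single map (a sketch; this entry-wise emission is an assumption, not part of the example above, and would change the gatherer’s result type to Map.Entry<Integer, List<Integer>>):

var perBlockFinisher = new BiConsumer<
        ConcurrentMap<Integer, List<Integer>>,
        Gatherer.Downstream<? super Map.Entry<Integer, List<Integer>>>>() {
    @Override
    public void accept(ConcurrentMap<Integer, List<Integer>> state,
                       Gatherer.Downstream<? super Map.Entry<Integer, List<Integer>>> downstream) {
        state.entrySet().forEach(downstream::push); // one push per block – multiple emissions
    }
};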

We have now examined the Gatherer in detail and pointed out the differences between sequential and parallel processing. With that, everything should be in place for your first experiments.

Happy Coding

Sven

For today's VOD we're back playing some more Monster Hunter, but not before we start off with a bit of Factorio. We'll see more of these sorts of streams come back for a while as we play more multiplayer games, as scheduling with others was a big factor in deciding what to stream on a given day.

youtu.be/szT1jplfZg4

#Game #Games #Gaming
@AJ Sadauskas
I mean, the Fediverse already has Lemmy, KBin, and MBin.

So there's already an ecosystem of pre-built communities out there.

/kbin is dead. Has been since last year. The last instances that haven't moved to Mbin are withering away.

However, in the "Lemmy clone" category, there's also PieFed, and Sublinks is still in development.

Also, the Facebook alternative Friendica ("Facebook alternative" not as in "Facebook clone", but as in "better than Facebook") has had groups since its launch in 2010, five and a half years before Mastodon. Hubzilla has had groups since 2012, when it was still a Friendica fork named Red. (streams) (2021) and Forte (2024) have groups, too. All four are part of the same software family, created by the same developer. And interacting with their groups from Mastodon is somewhat smoother than interacting with a Lemmy community.

On Friendica, a group is simply another user account, but with different settings: In "Mastodon speak", it automatically boosts any DM sent to it to all its followers. In reality, it's a little more complicated because, unlike Mastodon, Friendica has a concept of threaded conversations. (No, seriously, Mastodon doesn't have it. If you think Mastodon has it, use Friendica for a year or two as your only daily driver, and then think again.)

Likewise, on Hubzilla, (streams) and Forte, it's another channel with similar settings.

CC: @myrmepropagandist @Jasper Bienvenido @sebastian büttrich @Asbestos

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #FediverseGroups #Groups #PieFed #Sublinks #Friendica #Hubzilla #Streams #(streams) #Forte
@Jorge Candeias Bad idea. (Hubzilla user here.)

Hashtags are not only for discoverability (and critically so on Mastodon). They're also the preferred way of triggering the automatic generation of individual reader-side content warnings.

Content warnings that are automatically generated for each user individually based on keyword lists have a long tradition in the Fediverse. Friendica has had them long before Mastodon even existed, much less before Mastodon hijacked the summary field for content warnings. Hubzilla has had them since its own inception which was before Mastodon, too. (streams) has them, Forte has them.

On all four, automated reader-side content warnings are an integral part of their culture. And users of all four (those who are not recent Mastodon converts, at least, i.e. those who entered the Fediverse by joining Friendica in the early 2010s) insist on automated reader-side content warnings being vastly better than Mastodon's poster-side content warnings that are forced upon everyone all the same.

Oh, and by the way, Mastodon has this feature, too. It only introduced it in October 2022, and since the re-definition of Mastodon's culture in mid-2022 pre-dates it, it is not part of Mastodon's culture. But Mastodon has this feature.

However, in order for these content warnings to be generated, there needs to be a trigger. The safest way is by hashtags: If you post content that not everyone may want to see, add corresponding hashtags, enough to cover as many people as possible. If you don't want to see certain content right away, add the corresponding hashtags as keywords to NSFW (Friendica, Hubzilla, (streams), Forte) or a CW-generating filter (Mastodon).

In fact, hashtags can also be used to completely filter out content that you don't want to see at all. And they can be used to trigger such filters. This should work everywhere in the Fediverse.

I myself post stuff that some people don't want to see all the time. Hence, I need a whole lot of hashtags.

Let me explain the "hashtag wall" at the bottom of this comment to you.

  • #Long, #LongPost
    This comment is over 500 characters long. Many Mastodon users don't want to see any content that exceeds 500 characters. They can filter either or both of these hashtags and at least get rid of my content with over 500 characters.
    Why two hashtags? Because I can't know beforehand which one of them people will filter. And because I can't know beforehand which one of them people will search for or follow.
  • #CWLong, #CWLongPost
    The same as above, but making clear that it's supposed to stand in for a content warning ("CW: long (over 8,300 characters)"). Also, filtering these instead of the above has less of a chance of false positives than the above.
    Why two hashtags? Because I can't know beforehand which one of them people will filter. And because I can't know beforehand which one of them people will search for or follow.
  • #FediMeta, #FediverseMeta
    This comment contains Fediverse meta content. Some people don't want to read anything about the Fediverse, not even as by-catch or boosted to them by someone whom they follow or even only on their federated timeline. They can filter either or both of these.
    Why two hashtags? Because I can't know beforehand which one of them people will filter. And because I can't know beforehand which one of them people will search for or follow.
  • #CWFediMeta, #CWFediverseMeta
    The same as above, but making clear that it's supposed to stand in for a content warning ("CW: Fediverse meta" or, in this case, "CW: Fediverse meta, Fediverse-beyond-Mastodon meta").
    Why two hashtags? Because I can't know beforehand which one of them people will filter. And because I can't know beforehand which one of them people will search for or follow.
  • #Fediverse
    This comment is about the Fediverse. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic. Also, the hashtag helps people looking for content about the Fediverse find my comment.
  • #Mastodon
    This comment touches Mastodon as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic. Also, the hashtag helps people looking for content about Mastodon find my comment.
  • #Friendica
    This comment touches Friendica as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic, especially if you don't know what the hell Friendica is, but you're curious. Also, the hashtag helps people looking for content about Friendica find my comment.
  • #Hubzilla
    This comment touches Hubzilla as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic, especially if you don't know what the hell Hubzilla is, but you're curious. Also, the hashtag helps people looking for content about Hubzilla find my comment.
  • #Streams, #(streams)
    This comment touches (streams) as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic, especially if you don't know what the hell the streams repository is, but you're curious. Also, the hashtag helps people looking for content about (streams) find my comment.
    Why two hashtags if they're the same on Mastodon? Because they are not the same on Friendica, Hubzilla (again, that's where I am), (streams) itself and Forte. If I have to choose between catering to the technologies and cultures of Friendica, Hubzilla, (streams) and Forte and catering to Mastodon's, I will always choose the former.
  • #Forte
    This comment touches Forte as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic, especially if you don't know what the hell Forte is, but you're curious. Also, the hashtag helps people looking for content about Forte find my comment.
  • #MastodonCulture
    This comment touches Mastodon culture as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic, including critical views upon how Mastodon users try to force Mastodon's 2022 culture upon the users of Fediverse server applications that are very different from Mastodon, and that have had their own culture for much longer. Also, the hashtag helps people looking for content about Mastodon culture find my comment.
  • #Hashtag, #Hashtags
    This comment touches hashtags as a topic. If you don't like it, you can filter it out. Otherwise, click it or tap it to find more content on the topic. Also, the hashtag helps people looking for content about hashtags and their implications find my comment.
    Why two hashtags? Because I can't know beforehand which one of them people will filter. And because I can't know beforehand which one of them people will search for or follow.
  • #HashtagMeta
    This comment contains hashtag meta content. Some people don't want to read anything about it, not even as by-catch or boosted to them by someone whom they follow, or even only on their federated timeline. They can filter it.
  • #CWHashtagMeta
    The same as above, but making clear that it's supposed to stand in for a content warning ("CW: hashtag meta").

By the way: hashtags for triggering filters are even more important on Hubzilla in comments that Mastodon users may see. That's because Hubzilla cannot add Mastodon-style content warnings to comments (= everything that replies to something else; on Hubzilla, a comment is very different from a post that isn't a reply). What's a content warning on Mastodon is still (and justifiably so) a summary on Hubzilla. But from a traditional blogging point of view (Hubzilla can very much be used for full-fledged long-form blogging with all bells and whistles), a summary for a comment doesn't make sense. Thus, the comment editors have no summary field on Hubzilla, and thus I can't add Mastodon-style CWs to comments here on Hubzilla.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Friendica #Hubzilla #Streams #(streams) #Forte #MastodonCulture #Hashtag #Hashtags #HashtagMeta #CWHashtagMeta

And we've officially started VODs for 2020! Man, it still baffles me how it can both feel like yesterday and forever ago that we played Monster Hunter World. I remember playing the original on our family PS2 decades ago, using PlayStation Online to hunt Lao Shan Lung, and playing MHFU with my brother on our PSPs. The series has come a long way from the days of superman poses and room-based boss fights.

youtu.be/Xd7J_2XrcgU

#Game #Games #Gaming

Happy Friday! Today we have the second VOD for our #Kingdom playthroughs. All in all, I prefer Two Crowns, especially after returning to it years later. I wish there was more of an open sandbox version, but I'm assuming you'd just never leave the island, especially once you close the small portals.

Maybe I just like colony builders...

youtu.be/qxRQ_BGRVcE

#Game #Games #Gaming
@zeitverschreib [friendica] I'm on Hubzilla and (streams), and I can say it's not good for muscle memory. But it's doable.

Coming from Friendica it's a huge mental shift anyway, because (streams) is not simply "Red Matrix 2.0", i.e. not simply Friendica with nomadic identity. Already when Zap came into being in 2018, when the UI was gradually being rebuilt, Friendica had had new developers for seven years who no longer paid any attention to the development of Red Matrix and Hubzilla.

(streams) now groups almost all settings under burger menu > Settings. That means the fiddling with the gears at the top left is gone, although /settings/features still exists and there is still no way to reach it through the UI. In any case, that is also where you set the channel-wide permissions (where a "channel" is not a selection of incoming content but an identity, of which you can have several separate ones on one account, rather like several Friendica accounts, but with one and the same login).

Certain features are optional; like Hubzilla, (streams) is highly modular. That means you will first have to activate them as an "app" on the admin side and then "install" them as an "app" on the user side. Some things that are still an app on Hubzilla are built into the core on (streams).

Conversely, permission roles (Hubzilla: contact roles) are no longer installed by default, at least for users, because you don't need them any more. On Hubzilla, contact roles are strictly necessary because they are the only way to control a contact's permissions. On (streams), you can toggle every permission individually for each contact; permission roles are just presets to make your life easier if you always configure the same things for certain contacts anyway.

By the way: what is called "share" on Friendica is "repeat" on Hubzilla and (streams). "Share" on Hubzilla and (streams) is probably "share with quote" on Friendica.

Otherwise, have a look at my Mastodon/Friendica/Hubzilla/(streams) comparison tables.

Support for (streams) is available from @Streams. You can also turn to @Der Pepe (nomád) ⁂ ⚝, who runs two Hubzilla hubs as well as one (streams) and one Forte server, or to me. I'm most active here on Hubzilla; on (streams) you'll find me as @Jupiter's Fedi-Memes on (streams) (my outlet for Fediverse memes; not very active) and @Jupiter Rowland's (streams) outlet (my outlet for images from OpenSim; even less active).

#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica #Hubzilla #Streams #(streams) #Forte
hub.netzgemeinde.eu: Mastodon vs Facebook alternatives – a comparison between Mastodon, Friendica, Hubzilla and (streams)

Fast forward a few months, and here we are around Christmas of 2019, trying out a game I had at this point only heard of, called #Kingdom, which is a 2D side-scrolling tower defense game. These were quite fun, actually! I've even picked them back up here and there on my #SteamDeck recently, they're such gorgeous games!

youtu.be/rLjk7zfN1TI

#Game #Games #Gaming
@Joaquim Homrighausen @Kevin Beaumont To be fair, full data portability via ActivityPub has only been available in a stable release of anything for two weeks.

That was when @Mike Macgirvin 🖥️'s Forte, created in mid-August of 2024 as a fork of his own streams repository and the latest member of a family of software that started in 2010 with Friendica, had its very first official stable release.

And, in fact, Forte just uses ActivityPub to do something that (streams) and its predecessors all the way to the Red Matrix from 2012 (known as Hubzilla since 2015) have been doing using the Nomad protocol (formerly known as Zot). It's called nomadic identity. This is technology that's over a dozen years old on software that was built around this technology from the get-go, only that it was recently ported to ActivityPub.

Now, nomadic identity via ActivityPub was @silverpill's idea. He wanted to make his Mitra nomadic. He started working in 2023. The first conversion of existing non-nomadic server software to nomadic still isn't fully done, much less officially rolled out as a stable release.

If Mastodon actually wanted to implement nomadic identity, they would first have to wait until Mitra has a first stable nomadic release. Then they would have to wait until nomadic identity on Mitra (and between Mitra and Forte) has become stable and reliable under daily non-lab conditions. (Support for nomadic identity via ActivityPub on (streams) worked nicely under lab conditions. When it was rolled out to the release branch, and existing instances upgraded to it, it blew up in everyone's faces, and it took months for things to stabilise again.)

Then they would have to look at how silverpill has done it and how Mike has done it. Then they would have to swallow their pride and decide to adopt technology that they can't present as their own original invention because it clearly isn't. And they would have to swallow their pride again and decide against making it incompatible with Mitra, Forte and (streams) just to make these three look broken and inferior to Mastodon.

And only then they could actually start coding.

Now look at how long silverpill has been working on rebuilding Mitra into something nomadic. This takes a whole lot of modifications because the concept of identity itself has to be thrown overboard and redefined because your account will no longer be your identity and vice versa. Don't expect them to be done in a few months.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Mitra #RedMatrix #Friendica #Hubzilla #Streams #(streams) #Forte #DataPortability #NomadicIdentity
Codeberg.org: fortified/forte – Nomadic fediverse server.