#localai

**ACE-Step**: a framework for generating music on your local machine
🎼 Tracks up to 4 minutes long in 20 seconds
⚡️ 15× faster than Suno AI
🎛️ Control over genre, lyrics, and rhythm
🧠 Works with 19 languages
🎚️ Edits existing audio
💻 A100 / 4090 / 3090 (min. 16 GB VRAM)
GitHub: github.com/ace-step/ACE-Step
#MusicGen #AIaudio #OpenSource #LLM #GenAI #TextToMusic #LocalAI #RTX4090 #AItools #GeekStack #Python #cuda

**ACE-Step** is an open-source framework for generating music "like Suno," only locally and with no nonsense.
⚙️ Feed it lyrics, style, genre, and tags; it returns a track with a proper melody, rhythm, and even harmony.
🚀 Up to 4 minutes of audio in ~20 seconds on an A100. 15× faster than the LLM behemoths.
🧩 It generates, remixes, masks, replaces lines, and makes variations.
🧠 19 languages, including Russian.
🎧 From techno to orchestral, with no extra API keys or queues.
🎮 Runs on an A100 / 4090 / 3090 with at least 16 GB of VRAM, Python + CUDA.
👉 GitHub: github.com/ace-step/ACE-Step
#AIAudio #ACEstep #GenMusic #OpenSource #RTX4090 #CUDAcore #LLMsucks #SunoAlt #LocalStack #DevRig #GeekTools

ACE-Step is not just another music generator. It is *a real breakthrough in how AI becomes a tool* rather than a subscription service. Why this matters:
🚀 **1. Near-real-time speed**
Four minutes of finished track in ~20 seconds of rendering is no longer "waiting for a result"; it is a **generative live loop**, like a musician working in a DAW.
Neither Suno, nor Udio, nor LLM-based generators offer latency like this. Here the feedback is practically instant, so you can build a production workflow around it the way you would around a sampler or a synth engine.
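The speed claim is easy to sanity-check. A real-time factor (seconds of audio produced per second of compute) makes the comparison concrete; the numbers below are the ones quoted above:

```python
# Real-time factor: seconds of audio generated per second of rendering.
# Figures are the ones quoted in the post (4-minute track, ~20 s on an A100).
def realtime_factor(audio_seconds: float, render_seconds: float) -> float:
    """How many seconds of audio come out per second of compute."""
    return audio_seconds / render_seconds

print(realtime_factor(4 * 60, 20))  # 12.0: ~12 s of music per rendering second
```

Anything above 1.0 is faster than real time, which is what makes a live-loop workflow plausible at all.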
🔧 **2. Full control and editability**
You can:
change genre and style without losing the rhythm,
edit lyrics and phrases,
mask sections and splice in new ones,
which makes it a **true DAW companion**, not just a "black-box generator".
🧠 **3. Generative music has gone open source**
All of it runs locally, open source, with no API keys, limits, or censorship.
This is something MidJourney never offered in imaging, nor Suno/Udio in music: **an independent pipeline you can deploy yourself**.
You can do more than use it; you can **embed it, modify it, hack it**.
🎛️ **4. No longer a toy: this is production-ready**
19 languages, professional sound, controllable structure, and mix/mastering support.
Suitable for:
synthesizers with an AI backend,
generating tracks for video, games, backgrounds, and advertising,
auto-writing demos, podcasts, and background soundtracks.
🧬 **5. A new milestone: generative AI as a tool, not a platform**
This is the shift from model-as-a-service to **model-as-a-library**.
It is like comparing Google Translate with a local model you embed in your own product.
The shift: **LLM brain** + **DAW integration** + **musical interactivity** = a new generation of creative software.
ACE-Step is the point after which "AI makes music" is no longer a metaphor but a **new workflow**.
No limits. No middlemen. No lag. On your own GPU.
Hashtags:
#GenerativeAI #AIinMusic #OpenSourceMusic #TextToAudio #AIProducer #RealtimeAudioAI #DAWnextgen #LLMsound #AceStep #aitools

source: bastyon.com/post?s=874313bd495

Replied to Ivan Todorov

@ivantodorov
The main problem is that most aliases fail on pronunciation, especially with a non-English speaker's accent or when using your "supported" native language.

It is really something when the speech-to-text writes down what it thinks you said.

My wife laughed at me after I had spent 15 minutes talking to myself: "hey, what's the weather here" and "dude, turn on the light". Those 15 minutes would have been enough to buy bread, come home again, and turn the light on manually.

I once planned to enable Voice Assistant for our toddler, who couldn't reach the switch. But in the end the sentences were never recognized.

So nowadays we have motion sensors managing the lights. The system is smarter now without anyone talking to it.

Apparently, the #apocalypse has arrived and your savior is a USB stick crammed with half-baked local LLMs! 🌍💻 Let's all gather 'round and watch this glorified spreadsheet try to outdo Wikipedia—because who needs complete information when you can have a mini, glitchy encyclopedia on your laptop? 🤖📚
evanhahn.com/local-llms-versus #USBstick #LLMs #glitchyencyclopedia #localAI #HackerNews #ngated

evanhahn.com · Local LLMs versus offline Wikipedia: How do the sizes of local LLMs compare to the size of offline Wikipedia downloads?
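The article's comparison comes down to file sizes, and the model side is easy to estimate: a checkpoint is roughly parameter count times bits per weight. A rough sketch (real on-disk formats such as GGUF add some metadata and per-block scale overhead on top of this):

```python
# Back-of-the-envelope checkpoint size for a quantized model:
# parameters x bits per weight, ignoring format overhead.
def model_size_gb(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9

print(model_size_gb(7e9, 4))   # 3.5  -> a 7B model at 4-bit is ~3.5 GB
print(model_size_gb(70e9, 4))  # 35.0 -> a 70B model at 4-bit is ~35 GB
```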
Replied in thread

@LokiTheCat it allows people to share their docs into the federation, then other people add material and it becomes the corpus of that sector: osint/comp intel, indexes, specialized insular industry info, but all these people are working together for the greater good. You can run bigger models, and they can be trained on docs specific to your sector, so you get better answers. #semantic tags #metadata #filtered lists #bloomberg terminal killer #rag pipelines #trends #real time dashboards #distributed inference #localai
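The "#rag pipelines" part of that vision boils down to retrieval over a shared corpus. Here is a minimal sketch of the retrieval step, using bag-of-words cosine similarity as a stand-in for a real embedding model; the corpus strings are made up for illustration:

```python
import math
from collections import Counter

# Retrieval half of a RAG pipeline over a shared document pool.
# Bag-of-words vectors stand in for real embeddings; a production
# setup would use an embedding model and a vector index.
def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus docs most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

corpus = [
    "quarterly index of sector trends and dashboards",
    "osint notes on supply chain incidents",
    "recipe for sourdough bread",
]
print(retrieve("sector trends dashboard", corpus))  # the sector-trends doc
```

The retrieved passages would then be stuffed into the prompt of a locally run model, which is what lets a sector-specific corpus produce better answers than a generic one.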

🛡️ Your AI evolution, YOUR data. Heredity OS puts privacy first! 🧬
Unlike big tech AI that harvests your data, Heredity OS keeps genetic algorithms & lineage tracking completely local. Zero telemetry, zero tracking, zero corporate surveillance.
✨ Privacy features:

Cryptographic lineage protection
Local-only AI processing
No cloud dependencies
Hardened against data leaks

Replied in thread

@Catvalente

Or just use your AI locally 🦾 💻 🧠

I completely understand the concerns about relying too heavily on AI, especially cloud-based, centralized models like ChatGPT. The issues of privacy, energy consumption, and the potential for misuse are very real and valid. However, I believe there's a middle ground that allows us to benefit from the advantages of AI without compromising our values or autonomy.

Instead of rejecting AI outright, we can opt for open-source models that run on local hardware. I've been running large language models (LLMs) locally on my own hardware. This approach offers several benefits:

- Privacy - By running models locally, we can ensure that our data stays within our control and isn't sent to third-party servers.

- Transparency - Open-source models allow us to understand how the AI works, making it easier to identify and correct biases or errors.

- Customization - Local models can be tailored to our specific needs, whether it's for accessibility, learning, or creative projects.

- Energy Efficiency - Local processing can be more energy-efficient than relying on large, centralized data centers.

- Empowerment - Using AI as a tool to augment our own abilities, rather than replacing them, can help us learn and grow. It's about leveraging technology to enhance our human potential, not diminish it.

For example, I use local LLMs for tasks like proofreading, transcribing audio, and even generating image descriptions. Instead of ChatGPT and Grok, I utilize Jan.ai with Mistral, Llama, OpenCoder, Qwen3, R1, WhisperAI, and Piper. These tools help me be more productive and creative, but they don't replace my own thinking or decision-making.
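Tools like these typically sit behind a local OpenAI-compatible HTTP endpoint, which is what keeps the data on your machine. Below is a sketch of the request body such an endpoint expects; the URL and port are assumptions (check your local server's settings), and only the payload is built here, so nothing needs to be running:

```python
import json

# Assumed local endpoint; host/port depend on your server's configuration.
LOCAL_URL = "http://localhost:1337/v1/chat/completions"

def build_chat_payload(model: str, prompt: str) -> dict:
    """OpenAI-style chat-completion request body for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_payload("mistral-7b", "Proofread: Their going home.")
print(json.dumps(payload, indent=2))
```

An actual call would POST this JSON to LOCAL_URL with any HTTP client; the point is that no text ever leaves the machine.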

It's also crucial to advocate for policies and practices that ensure AI is used ethically and responsibly. This includes pushing back against government overreach and corporate misuse, as well as supporting initiatives that promote open-source and accessible technologies.

In conclusion, while it's important to be critical of AI and its potential downsides, I believe that a balanced, thoughtful approach can allow us to harness its benefits without sacrificing our values. Let's choose to be informed, engaged, and proactive in shaping the future of AI.

CC: @Catvalente @audubonballroon
@calsnoboarder @craigduncan