#instructlab


** #InstructLab Core v0.24.z and v0.23.z minor releases **

A couple of minor releases for Core:
- fix batch sizing for SDG (0.24.2 & 0.23.3)
- fix a missing Python package dependency and error handling for "ilab taxonomy diff" (0.24.3 & 0.23.4); see the sketch below
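For reference, a minimal upgrade-and-check sketch. The version pins are just the releases mentioned above; pick the z-stream that matches your install, and treat the comments as a rough description rather than official docs:

    pip install "instructlab==0.24.3"   # or 0.23.4 if you are on the 0.23.z stream
    ilab taxonomy diff                  # list changed taxonomy files and validate them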

Changelogs: github.com/instructlab/instruc

GitHub: Releases · instructlab/instructlab. InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data.

I had the pleasure of chatting with Katherine Druckman, host of the #OpenatIntel podcast, at #AllThingsOpen last fall:

"Democratizing AI: Collaborative AI Development with #InstructLab"

openatintel.podbean.com/e/demo

openatintel.podbean.com: "Democratizing AI: Collaborative AI Development with InstructLab" | Open at Intel

In this episode, we have an insightful discussion with Carol Chen from Red Hat at the All Things Open conference. Carol, who works in the Open Source Program Office at Red Hat, shares her experiences and insights on her ongoing project, InstructLab, a collaboration with IBM aimed at applying open source methods to building and training large language models. The conversation covers the importance of democratizing AI, reducing the fear and misconceptions surrounding AI technology, and making AI tools and concepts more accessible and understandable for everyone, including those who are not tech-savvy. Carol also discusses the social responsibility associated with AI development, emphasizing the need for transparency and community collaboration.

00:00 Introduction and Welcome
00:17 Carol's Background and Role at Red Hat
01:00 AI and Open Source
03:13 Challenges and Opportunities in AI
06:43 InstructLab: Making AI Accessible
12:09 Personal Journey into AI
15:37 AI Ethics and Open Source

Resources: Applying Open Source Methods to Building and Training Large Language Models - Carol Chen & JJ Asghar

Guest: Carol Chen is a Community Architect at Red Hat, supporting and promoting various upstream communities such as InstructLab, Ansible and ManageIQ. She has been actively involved in open source communities while working for Jolla and Nokia previously. In addition, she also has experience in software development/integration from her 12 years in the mobile industry. Carol has spoken at events around the world, including DevConf.CZ in the Czech Republic and the OpenInfra Summit in China. On a personal note, Carol plays the timpani in an orchestra in Tampere, Finland, where she now calls home.

#InstructLab at #CfgMgmtCamp

Presentation: "Open Source AI and InstructLab" - JJ Asghar, Feb 4 (Tues), 14:50 - 15:40 in B.Con cfp.cfgmgmtcamp.org/ghent2025/

Workshop: "InstructLab workshop" - Feb 5 (Wed), starting at 13:00 in B.3.013 cfp.cfgmgmtcamp.org/ghent2025/

@cfgmgmtcamp

cfp.cfgmgmtcamp.org: "Open Source AI and InstructLab" (CfgMgmtCamp 2025 Ghent)

In a world of fast-moving AI adoption, the big players want you to play with their versions of AI. The problem, though, is that their AI is usually built in a way that is closed off from the eyes of our tech community, with little or no oversight for choices and legal grey areas for usage and adoption. What if I told you there was a way to get the best of both worlds? An AI solution that can be externally verified and trusted legally, and we want you, yes, you, to join us in building a genuinely transparent AI solution.

This is what the Granite and Granite-Code foundational models are. You can read the paper on how the model was initially trained and have IBM's lawyers back up claims made from using Granite or Granite-Code. Can your other AI providers say that? Will they give you the design documents on how they built it from the ground up? Or will they put their lawyers behind your usage of their AI? Would you put your business at risk by using something like this when the legal area is so grey and ever-changing?

But that's only a point in time; you also need to add skills and knowledge to the ever-growing AI system, which is where InstructLab comes into play. During this presentation/workshop, we will show you why you should care about Open Source AI, teach you how to leverage a purely Open Source AI for a local "co-pilot"-like experience, and then help train the Granite foundational model with new knowledge, giving you the skills to help build a genuinely transparent AI. Join us and learn with us. We want to build a future of transparency and legal protection for AI engineers.

** InstructLab Core v0.23.0 Release **

- ilab data generate -dt lets you run data generation in the background
- ilab model upload: upload models to HuggingFace, S3, and OCI-compatible registries
- ilab rag command group enables RAG in InstructLab! These features are experimental; see the sketch after this list.
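A hedged sketch of trying out the new commands. Only the -dt flag comes from the notes above; the --help calls are the safe way to discover the actual options for upload and RAG:

    ilab data generate -dt    # run synthetic data generation detached, in the background
    ilab model upload --help  # show supported destinations (HuggingFace, S3, OCI registries)
    ilab rag --help           # explore the experimental RAG command group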

Full changelog: github.com/instructlab/instruc

Announcement: groups.google.com/a/instructla

GitHub: Release v0.23.0 · instructlab/instructlab

v0.23.0 Breaking Changes: llama-cpp-python has been bumped to 0.3.2. This allows for serving of Granite 3.0 GGUF Models. With this change, some previous handling of context window size has been mod...
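As a rough sketch of what the llama-cpp-python bump enables, a Granite 3.0 GGUF model can be served locally and chatted with; the model filename below is purely illustrative:

    ilab model serve --model-path ~/.cache/instructlab/models/granite-3.0-8b-instruct-Q4_K_M.gguf   # path is an example, not a real artifact name
    ilab model chat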