#llmapplications

0 posts · 0 participants · 0 posts today
Hacker News
12-factor Agents: Patterns of reliable LLM applications
https://github.com/humanlayer/12-factor-agents

#HackerNews #12factorAgents #LLMapplications #ReliablePatterns #AIdevelopment #SoftwareEngineering
Sarah Lea
What happens when a language model solves maths problems?

"If I’m 4 years old and my partner is 3x my age – how old is my partner when I’m 20?"
Do you know the answer?

🤥 An older Llama model (by Meta) said 23.
🤓 A newer Llama model said 28 – correct.

So what made the difference?

Today I kicked off the 5-day Kaggle Generative AI Challenge.
Day 1: Fundamentals of LLMs, prompt engineering & more.

Three highlights from the session:

☕ Chain-of-Thought Prompting
→ Models that "think" step by step tend to produce more accurate answers. Sounds simple – but just look at the screenshots...

☕ Parameters like temperature and top_p
→ Try this on together.ai: prompt a model with “Suggest 5 colors” – once with temperature 0 and once with 2. Notice the difference?

☕ Zero-shot, One-shot, Few-shot prompting
→ The more examples you provide, the better the model understands what you want.

#PromptEngineering #GenerativeAI #LLM #Kaggle #LLMApplications #AI #DataScience #Google #Python #Tech
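The post above names three prompting techniques; the sketch below tries all three against a chat-completion API. It assumes an OpenAI-compatible endpoint (together.ai exposes one), and the base URL, API key, and model name are placeholders rather than tested values. For the age riddle itself, the partner is 12 when the narrator is 4, so the gap is a constant 8 years and the answer at 20 is 20 + 8 = 28; spelling out that intermediate step is exactly what chain-of-thought prompting asks the model to do.

```python
# Minimal sketch of the three prompting ideas from the post.
# Assumptions: an OpenAI-compatible chat endpoint (together.ai offers one);
# base_url, api_key, and MODEL below are placeholders, not tested values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                  # placeholder
)
MODEL = "meta-llama/Llama-3-8b-chat-hf"      # placeholder model identifier

def ask(prompt: str, temperature: float = 0.0, top_p: float = 1.0) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        top_p=top_p,
    )
    return resp.choices[0].message.content

# 1) Chain-of-thought: ask for step-by-step reasoning before the final answer.
cot_answer = ask(
    "If I'm 4 years old and my partner is 3x my age, how old is my partner "
    "when I'm 20? Think step by step, then state the final answer."
)

# 2) Sampling parameters: same prompt at temperature 0 and temperature 2.
stable = ask("Suggest 5 colors.", temperature=0.0)
varied = ask("Suggest 5 colors.", temperature=2.0)

# 3) Few-shot prompting: give a worked example so the model copies the pattern.
few_shot = ask(
    "Q: I'm 4 and my sibling is 2x my age. How old is my sibling when I'm 10?\n"
    "A: The sibling is 8 at age 4; the 4-year gap is constant, so 10 + 4 = 14.\n"
    "Q: I'm 4 and my partner is 3x my age. How old is my partner when I'm 20?\n"
    "A:"
)

print(cot_answer, stable, varied, few_shot, sep="\n---\n")
```

Raising temperature (or top_p) widens the sampling distribution, which is why the color list should drift between runs at temperature 2 while staying essentially fixed at 0.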