Stuck with messy, unstructured data? (Un)Perplexed Spready and AI to the rescue!
Automate data extraction, categorization, and analysis. Drink coffee while AI works.
Find out how: https://matasoft.hr/qtrendcontrol/index.php/un-perplexed-spready
#AI #DataManagement #Productivity #SpreadsheetMagic #AIinSpreadsheets #DataDriven #Efficiency #Innovation #Ollama #SpreadsheetAI #DataAnalysis #Privacy #Spreadsheets #LLM #AutomationRevolution
#StandardizationMadeEasy #CustomerInsights #ProductAnalysis #SentimentAnalysis #BI
#emacs chatgpt-shell just got its first 1000 GitHub stars!
#Ollama is an application that lets users run large language models locally, including fully offline. It supports both CPU and GPU installations, with dedicated builds for #NVIDIA and #AMD graphics cards.
https://ollama.com/
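Getting a model running locally is only a couple of commands. A minimal sketch (the model name is just an example; the Linux install one-liner is the one published on ollama.com):
curl -fsSL https://ollama.com/install.sh | sh    # install on Linux (macOS/Windows installers are on ollama.com)
ollama pull llama3                               # download a model once
ollama run llama3 "Why run an LLM locally?"      # then chat with it, fully offline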
I like coding with these AI LLM models!
I only have to prompt the AI as if I were a micromanaging idiot to get the code I want. Feels like my previous boss, except he wouldn't manage to spot the bugs in the delivered code and update the prompt for a retry.
Last night I was up until 2 AM trying to get the #truenas #amd drivers installed inside a #docker #container so that #ollama would actually use the #gpu. I was so close: it sees the GPU, it sees that it has 16GB of RAM, and then it uses the #cpu anyway.
TrueNAS locks down the file system at the root level, so if you want to do much of anything, you have to do it inside a container. So I made a container for the #rocm drivers, which, btw, comes to something like 40GB in size.
It's detecting the GPU, but I don't know if the ollama container is missing some commands it may need, e.g. rocm or rocminfo.
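For anyone fighting the same setup: the upstream Ollama docs suggest running the ROCm-tagged image with the AMD kernel devices passed through. A rough sketch of that documented route, not a guaranteed fix (the HSA override is only needed for cards ROCm doesn't officially support, and the value is GPU-specific):
# ROCm image plus the AMD kernel devices; persist models and expose the API
docker run -d --name ollama \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  ollama/ollama:rocm
# Then check whether inference actually lands on the GPU
docker exec -it ollama ollama run llama3 "hello"
docker logs ollama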
Another alternative, which I don't really want, is to install either #debian or Windows as a VM. Windows because I previously tested the application that runs locally on Windows on this machine and it was super fast. It isn't ideal in terms of RAM usage, but I may be able to run the models more easily with the #windows drivers than the #linux ones.
But anyway, last night was too much of #onemoreturn for a weeknight.
Integrate LLMs into your projects faster! The Techlatest.net Multi-LLM VM now supports the Ollama API.
Use /api/generate and /api/chat endpoints with models like qwen2.5:7b.
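Against a stock Ollama endpoint the calls look roughly like this (a sketch assuming the default port 11434 on localhost; swap in the VM's address):
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "One sentence on what this endpoint does.",
  "stream": false
}'
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5:7b",
  "messages": [{"role": "user", "content": "Same question, via the chat endpoint."}],
  "stream": false
}'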
Streamline your workflow.
More details: https://tinyurl.com/kzz4jtzw
For free course: https://tinyurl.com/2k7frdas
#developers #llmops #api #machinelearning #ollama
Did I need this upgrade for my #homelab? No! What to do now? Dreaming of a used Tesla P40 to play more with machine learning stuff (which I didn't know I wanted before). #Proxmox #ollama #immich #homeassistant
Need to classify products by description? (Un)Perplexed Spready + AI can do it! Use `=ASK_LOCAL2` to compare product features, packaging and more. It's like having an AI assistant in your spreadsheet!
Check it out: https://matasoft.hr/qtrendcontrol/index.php/un-perplexed-spready
#AIinSpreadsheets #DataDriven #Efficiency #Innovation #AI #Ollama #SpreadsheetAI #DataAnalysis #Privacy #Spreadsheets #Productivity #LLM #AutomationRevolution
#StandardizationMadeEasy #CustomerInsights #ProductAnalysis #SentimentAnalysis #MDM
Unleash the power of multiple LLMs with ease! Techlatest.net's Multi-LLM VM, powered by Ollama, lets you build, deploy, and scale AI apps faster.
Pre-configured models, Open-WebUI, and API integration included.
More details: https://tinyurl.com/2n2scwy6
For free course: https://tinyurl.com/2k7frdas
#LLM #AI #MachineLearning #Ollama
Hongkiat: Running Large Language Models (LLMs) Locally with LM Studio. “Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently. In this article, we will walk you through optimizing your […]”
Made my own documentary in seconds using AI! No film crew needed - just Gemma 3, #elevenlabs, #n8n, #windsurf, #ollama & some code magic. Here's how I did it... #AI #TechTutorial https://www.youtube.com/watch?v=8n_tpLn6Xbo
---
❯ ollama run llama3-chatqa:70b
>>> Who are you?
I'm your assistant!
>>> Why should i trust you?
I am an open-source AI assistant trained on a diverse range of datasets to provide helpful and
informative responses.
>>> When training, did you respect the robots.txt?
No, I didn't.
---
At least this model is open about ignoring the #robotstxt! Let's see what it says to the question why.