Running a Local AI Inside Obsidian with Ollama
What if your Obsidian notes could think with you, without sending a single word to the cloud? This article shows you how to build a fully local AI setup inside Obsidian using Ollama and the open-weight Gemma 3 model. You’ll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, and how to connect it to Obsidian with community plugins for a seamless writing experience.
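As a preview of the command-line step, the setup boils down to a few commands. This is a sketch assuming a Linux or macOS machine; the install script URL is Ollama's official one, and `gemma3` is the model tag in the Ollama library (on Windows you'd use the installer from ollama.com instead):

```shell
# Install Ollama via the official install script (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Download the Gemma 3 model weights to your machine
ollama pull gemma3

# Quick smoke test: send a one-off prompt from the terminal
ollama run gemma3 "Summarize local-first software in one sentence."
```

If the last command prints a coherent answer, the model is running entirely on your hardware and you're ready to wire it into Obsidian.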
By the end of this tutorial, you’ll have a private AI assistant that can rewrite text, summarize notes, brainstorm ideas, and help draft content, all while keeping full control over your data. This setup combines Obsidian’s local-first philosophy with Ollama’s open-source AI capabilities to create a knowledge workspace that’s private, hackable, and built to last, proving that powerful tools don’t have to come at the cost of control.
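Under the hood, the Obsidian community plugins talk to Ollama over its local REST API on port 11434, which is also what you can script against directly. Here is a minimal sketch in Python, assuming `ollama serve` is running and the `gemma3` model has been pulled (the helper names `build_payload` and `generate` are illustrative, not part of any plugin):

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here leaves your machine
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the gemma3 model available
    print(generate("gemma3", "Rewrite this more concisely: Notes are stored locally."))
```

The same request shape (model, prompt, `"stream": False`) is what a "summarize this note" or "rewrite selection" plugin command sends on your behalf, which is why the whole workflow stays private.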