When something is open-source, it means the recipe is public. You can see how it works, change it if you want, and use it without asking permission. For software, that usually means the source code is published under a license that allows reuse and modification. For AI models, it often means the model weights — the trained "brain" — are released so others can run, study, or adapt them.
Think of the difference between a restaurant's secret sauce and a published cookbook. The secret sauce stays behind closed doors; you can't replicate it or improve it. The cookbook lets anyone try the recipe, tweak it for their kitchen, or build something new on top of it. Open-source follows the cookbook model: transparency and shared building blocks.
Why it matters for AI: open-weight models like Llama, Mistral, and many others let companies run AI on their own servers, fine-tune it for their use case, and avoid depending on a single vendor's API. Researchers can audit how these models behave and, when the training data is disclosed, study what went into them. The trade-off is that open models may lag behind the best closed models in raw capability, and running them yourself requires technical skill and compute.
"Open" doesn't always mean completely free — some licenses restrict commercial use or require attribution. But the core idea holds: open-source gives you visibility and control that closed systems don't.