Welcome to kubectllama! 🐾 The AI-powered CLI tool that takes your Kubernetes management to the next level by letting you run `kubectl` commands through natural language. 🎉 Say goodbye to memorizing complex `kubectl` commands and let kubectllama handle it for you! 🤖✨
- 🗣️ Natural Language Processing: Simply type commands like "Get all pods in the default namespace" and let kubectllama do the magic.
- ⚡ Fast & Efficient: Get complex `kubectl` commands with minimal effort and increased productivity.
- 🔒 Safe & Secure: Your AI assistant lives locally on your machine, ensuring your commands are processed securely.
- 💬 Confirmation Step: kubectllama never executes commands itself; it only displays the suggested command, preventing unwanted actions (see the example right after this list).
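For illustration, a session might look like the following; the request and suggestion here are hypothetical, but the `--> ` output format matches the usage example later in this README:

```sh
$ kubectllama delete all pods in the staging namespace
--> kubectl delete pods --all -n staging
```

Nothing runs until you copy the suggested command and execute it yourself.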
`kubectllama` can be installed either by downloading a pre-built executable from GitHub Releases or by cloning the repository and building from source. Below are instructions for both methods.
- Ollama: You need Ollama installed and running locally (default URL: `http://localhost:11434`). Download it from Ollama's website. A quick reachability check is shown after this list.
- Model: By default, `kubectllama` uses the `mistral` model because it offers the best tradeoff between speed and precision. However, you can use any Ollama model by specifying it with the `--model` flag (e.g., `--model llama3`). Pull your chosen model:

  ```sh
  ollama pull mistral       # Default model
  ollama pull <model-name>  # Or any other model
  ```

- Go: Required only for building from source (version 1.21+).
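Before running kubectllama, you can check that Ollama is up and see which models are pulled; this uses Ollama's standard REST API at the default URL above:

```sh
# Lists the models your local Ollama server has pulled
curl http://localhost:11434/api/tags
```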
Pre-built binaries are available for Linux, macOS, and Windows from the GitHub Releases page. Since the repository is public, no authentication is needed.
Linux:

```sh
curl -L -o kubectllama \
  https://github.com/jazzshu/kubectllama/releases/latest/download/kubectllama-linux-amd64
chmod +x kubectllama
sudo mv kubectllama /usr/local/bin/
```
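If you prefer not to use sudo, any directory on your PATH works; `~/.local/bin` below is just a common choice, not something the project requires:

```sh
mkdir -p ~/.local/bin
mv kubectllama ~/.local/bin/
# Add this line to your shell profile so it persists across sessions
export PATH="$HOME/.local/bin:$PATH"
```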
macOS:

```sh
curl -L -o kubectllama \
  https://github.com/jazzshu/kubectllama/releases/latest/download/kubectllama-macos-amd64
chmod +x kubectllama
sudo mv kubectllama /usr/local/bin/
```
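If you downloaded the binary with a browser instead of curl, macOS may quarantine it and refuse to run it; clearing the quarantine attribute is a common fix (this note is general macOS behavior, not something the project documents):

```sh
# Remove the quarantine flag Gatekeeper attaches to browser downloads
xattr -d com.apple.quarantine /usr/local/bin/kubectllama
```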
Windows:

- Download kubectllama-windows-amd64.exe from the latest release.
- Move it to a directory in your PATH (e.g., C:\Windows\System32) using File Explorer or:

  ```
  move kubectllama-windows-amd64.exe C:\Windows\System32\kubectllama.exe
  ```
Verify the installation:

```sh
kubectllama --help
```
If you prefer to build kubectllama yourself or want to modify the code:
- Clone the repository:

  ```sh
  git clone https://github.com/jazzshu/kubectllama.git
  cd kubectllama
  ```
- Build the binary:

  ```sh
  go build -o kubectllama .
  ```
- Install it:

  ```sh
  sudo mv kubectllama /usr/local/bin
  chmod +x /usr/local/bin/kubectllama
  ```
- For Windows, move it to a PATH directory:

  ```
  move kubectllama.exe C:\Windows\System32\
  ```
Verify the installation:

```sh
kubectllama --help
```
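Alternatively, Go can fetch and install the tool in one step; this is a sketch assuming the module path matches the repository (check go.mod before relying on it):

```sh
# Installs into $(go env GOPATH)/bin, which must be on your PATH
go install github.com/jazzshu/kubectllama@latest
```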
After installation, run `kubectllama` with a natural language request:

```sh
kubectllama get pods running in the test namespace
```
Output (using the default `mistral` model):

```
--> kubectl get pods -n test
```
To use a different model, specify it with the `--model` flag:

```sh
kubectllama --model llama3 get pods running in the test namespace
```
If Ollama is running on a different host from the default one, you can specify it with the `--url` flag:

```sh
kubectllama --url http://my-ollama-custom-url:8080 get pods running in test namespace
```
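Because kubectllama only prints the suggestion, you may want a small wrapper that lets you review the command and then run it. This is a minimal sketch, assuming the `--> ` output prefix shown above; the function name `kai` is purely illustrative:

```sh
# Hypothetical helper: show kubectllama's suggestion, confirm, then execute it
kai() {
  suggestion=$(kubectllama "$@" | sed 's/^--> //')
  echo "Suggested: $suggestion"
  read -r -p "Run it? [y/N] " answer
  [ "$answer" = "y" ] && eval "$suggestion"
}
```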