Thank you for your interest in contributing to AIKit! This guide will help you set up your development environment and understand the development workflow.
Before you begin, ensure you have the following installed on your development machine:
- **Go**: Version 1.24.4 or later
  - Install from golang.org
  - Verify installation: `go version`
- **Docker**: Required for building and testing model images
  - Install from docker.com
  - Verify installation: `docker --version`
  - Ensure the Docker daemon is running
- **Git**: For version control
  - Most systems have this pre-installed
  - Verify installation: `git --version`
- **golangci-lint**: For code linting
  - Install: `go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest`
  - Note: The project uses the golangci-lint v2 configuration
- **pre-commit**: For automated code quality checks
  - Install: `pip install pre-commit` or `brew install pre-commit`
  - Setup: `pre-commit install` (after cloning the repository)
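As a quick sanity check, you can confirm that all of these tools are on your `PATH` at once. The small Go program below is a throwaway sketch (not part of the AIKit repository) that reports the status of each required tool:

```go
package main

import (
	"fmt"
	"os/exec"
)

// toolStatus returns "ok: <name>" if the named tool can be found on
// PATH, or "missing: <name>" otherwise.
func toolStatus(name string) string {
	if _, err := exec.LookPath(name); err != nil {
		return "missing: " + name
	}
	return "ok: " + name
}

func main() {
	// The tools this guide asks for.
	for _, tool := range []string{"go", "docker", "git", "golangci-lint", "pre-commit"} {
		fmt.Println(toolStatus(tool))
	}
}
```

Any `missing:` line points at a prerequisite you still need to install.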
Clone the repository and set up your environment:

```bash
git clone https://github.com/sozercan/aikit.git
cd aikit
go mod download
go mod verify
pre-commit install
```

This will automatically run linting and formatting checks before each commit.
To build the AIKit image:

```bash
make build-aikit
```

This creates a Docker image with the AIKit binary. You can customize the build with:

```bash
# Build with custom registry and tag
make build-aikit REGISTRY=myregistry TAG=mytag

# Build with custom output type
make build-aikit OUTPUT_TYPE=type=registry
```

Note: If you encounter TLS certificate issues during Docker builds (e.g., in sandboxed environments), ensure your Go proxy and Docker environment have proper network access and certificate trust chains configured.
To build a test model:

```bash
make build-test-model
```

This builds a test model using the default configuration (`test/aikitfile-llama.yaml`). You can specify a different configuration:

```bash
make build-test-model TEST_FILE=test/aikitfile-phi3.yaml
```

To run the unit tests:

```bash
make test
```

This runs all unit tests with race detection and generates a coverage report.
After building a test model, you can run it locally:
```bash
# CPU-only
make run-test-model

# With GPU support (requires NVIDIA Docker runtime)
make run-test-model-gpu

# Apple Silicon (experimental, requires Podman)
make run-test-model-applesilicon
```

The model will be available at http://localhost:8080. You can test it by:

- Web UI: Navigate to http://localhost:8080/chat
- API: Send requests to the OpenAI-compatible endpoint:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'
```

To lint the code:

```bash
# Install golangci-lint v2 (if not already installed)
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

# Run linting
export PATH="$(go env GOPATH)/bin:$PATH"
golangci-lint run -v ./... --timeout 5m
```

Note: The project uses the golangci-lint v2 configuration. Ensure you have the correct version installed.
The project follows standard Go conventions:
- Use `gofmt` for formatting (automatically handled by the linter)
- Follow the Effective Go guidelines
- Write tests for new functionality
- Add appropriate documentation for exported functions and types
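As an illustration of the last point, an exported function should carry a doc comment that begins with the function's name. The helper below is hypothetical (not part of AIKit), shown only to demonstrate the expected style:

```go
package main

import (
	"fmt"
	"strings"
)

// NormalizeTag trims surrounding whitespace from an image tag and
// lowercases it so that equivalent tags compare equal. An empty tag
// defaults to "latest". (Hypothetical helper, for illustration only.)
func NormalizeTag(tag string) string {
	trimmed := strings.TrimSpace(tag)
	if trimmed == "" {
		return "latest"
	}
	return strings.ToLower(trimmed)
}

func main() {
	fmt.Println(NormalizeTag("  V1.2.3 ")) // prints "v1.2.3"
}
```

Tools such as `go doc` and pkg.go.dev render these comments directly, which is why the convention matters for exported identifiers.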
Create a feature branch:

```bash
git checkout -b feature/your-feature-name
```

Then make your changes:

- Write code following the project's style guidelines
- Add tests for new functionality
- Update documentation as needed
```bash
# Run unit tests
make test

# Build and test a model locally
make build-test-model
make run-test-model

# Run linting
golangci-lint run -v ./... --timeout 5m
```

If you have pre-commit hooks installed, they will automatically run. Otherwise, ensure your code passes linting before committing:

```bash
git add .
git commit -m "feat: add your feature description"
git push origin feature/your-feature-name
```

Then create a pull request through the GitHub interface.
AIKit supports various model configurations. Test files are located in the test/ directory:
- `aikitfile-llama.yaml`: GGUF model (default)
- `aikitfile-llama-cuda.yaml`: CUDA-enabled GGUF model
- `aikitfile-hf.yaml`: Hugging Face model
- `aikitfile-unsloth.yaml`: Fine-tuning configuration
- `aikitfile-diffusers.yaml`: Diffusion model for image generation
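For orientation, these files share a common aikitfile shape. The sketch below is recalled from the project documentation and may be out of date; treat the files in `test/` as the authoritative reference for field names and values:

```yaml
#syntax=ghcr.io/sozercan/aikit:latest
# Minimal sketch of an aikitfile for a GGUF model (illustrative only;
# see test/aikitfile-llama.yaml for a real, working example).
apiVersion: v1alpha1
models:
  - name: llama-3.1-8b-instruct
    source: <URL of a .gguf model file>
```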
To test a specific configuration:
```bash
make build-test-model TEST_FILE=test/aikitfile-hf.yaml
make run-test-model
```

To build for multiple platforms:

```bash
make build-test-model PLATFORMS=linux/amd64,linux/arm64
```

To test with GPU acceleration, ensure you have the NVIDIA Docker runtime installed:

```bash
make build-test-model RUNTIME=cuda
make run-test-model-gpu
```

On Apple Silicon, use Podman with GPU acceleration:
```bash
make run-test-model-applesilicon
```

The repository is organized as follows:

- `cmd/`: Command-line interface code
- `pkg/`: Core library code
  - `aikit/config/`: Configuration parsing
  - `aikit2llb/`: BuildKit LLB conversion
  - `build/`: Build logic and validation
  - `utils/`: Utility functions
- `test/`: Test configurations and fixtures
- `models/`: Model-specific configurations
- `charts/`: Kubernetes Helm charts
- `website/`: Documentation website (Docusaurus)
- Check existing Issues for known problems
- Review the Documentation for detailed usage instructions
- Create a new issue if you encounter problems or have questions
AIKit uses semantic versioning. Version information is managed in:
- `Makefile`: Update the `VERSION` variable
- `charts/aikit/Chart.yaml`: Update `version` and `appVersion`
The release process is automated through GitHub Actions.
Thank you for contributing to AIKit! 🚀