58 changes: 34 additions & 24 deletions .github/copilot-instructions.md
@@ -1,40 +1,41 @@
# GitHub Copilot Instructions

> **Token Efficiency Note**: This is a minimal pointer file (~500 tokens, auto-loaded by Copilot).
> For complete operational details, reference: `#file:AGENTS.md` (~2,500 tokens, loaded on-demand)
> For specialized knowledge, use: `#file:SKILLS/<skill-name>/SKILL.md` (loaded on-demand when needed)
> **Token Budget**: Target 600, limit 650 (auto-loaded)
> Details: `#file:AGENTS.md` (~2,550 tokens, on-demand)
> Skills: `#file:SKILLS/<name>/SKILL.md` (on-demand)

## 🎯 Quick Context
## Quick Context

**Project**: ASP.NET Core 8 REST API demonstrating layered architecture patterns
**Stack**: .NET 8 (LTS) • EF Core 9 • SQLite • Docker • xUnit
**Pattern**: Repository + Service Layer + AutoMapper + FluentValidation
**Philosophy**: Learning-focused PoC emphasizing clarity and best practices
ASP.NET Core 8 REST API with layered architecture
**Stack**: .NET 8 LTS, EF Core 9, SQLite, Docker, xUnit
**Pattern**: Repository + Service + AutoMapper + FluentValidation
**Focus**: Learning PoC emphasizing clarity and best practices
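
For example, the FluentValidation piece of that pattern covers input structure only; a minimal sketch (the request DTO and rules below are illustrative assumptions, not the repository's actual types):

```csharp
using FluentValidation;

// Hypothetical request DTO; the real project defines its own Request/Response DTOs.
public record CreatePlayerRequest(string Name, int Age);

// Structural validation only; business rules stay in the service layer.
public class CreatePlayerRequestValidator : AbstractValidator<CreatePlayerRequest>
{
    public CreatePlayerRequestValidator()
    {
        RuleFor(x => x.Name).NotEmpty().MaximumLength(100);
        RuleFor(x => x.Age).InclusiveBetween(0, 120);
    }
}
```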

## 📐 Core Conventions
## Core Conventions

- **Naming**: PascalCase (public), camelCase (private)
- **DI**: Primary constructors everywhere
- **Async**: All I/O operations use async/await
- **Logging**: Serilog with structured logging
- **Testing**: xUnit + Moq + FluentAssertions
- **Formatting**: CSharpier (opinionated)
- **Formatting**: CSharpier
- **Commits**: Subject ≤80 chars, include issue number (#123), body lines ≤80 chars, conventional commits
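
A short sketch of these conventions together (the types are hypothetical stand-ins for the project's real entity and repository):

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical shapes standing in for the project's real types.
public record Player(int Id, string Name);
public interface IPlayerRepository { Task<Player?> GetByIdAsync(int id, CancellationToken ct); }

// Primary-constructor DI, async I/O, and structured logging in one place.
public class PlayerService(IPlayerRepository repository, ILogger<PlayerService> logger)
{
    public async Task<Player?> GetPlayerAsync(int id, CancellationToken ct = default)
    {
        // Structured logging: named placeholder instead of string interpolation.
        logger.LogInformation("Fetching player {PlayerId}", id);
        return await repository.GetByIdAsync(id, ct);
    }
}
```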

## 🏗️ Architecture at a Glance
## Architecture

```text
Controller → Service → Repository → Database
    ↓           ↓
Validation   Caching
```

- **Controllers**: Minimal logic, delegate to services
- **Services**: Business logic + caching with `IMemoryCache`
- **Repositories**: Generic `Repository<T>` + specific implementations
- **Models**: `Player` entity + Request/Response DTOs
- **Validators**: FluentValidation for input structure (business rules in services)
Controllers: Minimal logic, delegate to services
Services: Business logic + `IMemoryCache` caching
Repositories: Generic `Repository<T>` + specific implementations
Models: `Player` entity + DTOs
Validators: FluentValidation (structure only, business rules in services)
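
A compressed sketch of those layers (routes, cache policy, and type names are illustrative assumptions):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical shapes for the sketch; the real project defines its own.
public record Player(int Id, string Name);
public interface IPlayerRepository { Task<Player?> GetByIdAsync(int id); }
public interface IPlayerService { Task<Player?> GetByIdAsync(int id); }

// Controller stays thin and delegates to the service.
[ApiController]
[Route("api/players")]
public class PlayersController(IPlayerService service) : ControllerBase
{
    [HttpGet("{id:int}")]
    public async Task<IActionResult> Get(int id)
    {
        var player = await service.GetByIdAsync(id);
        if (player is null) return NotFound();
        return Ok(player);
    }
}

// Business logic plus IMemoryCache caching live in the service layer.
public class PlayerService(IPlayerRepository repository, IMemoryCache cache) : IPlayerService
{
    public async Task<Player?> GetByIdAsync(int id) =>
        await cache.GetOrCreateAsync($"player:{id}", entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(5); // illustrative cache policy
            return repository.GetByIdAsync(id);
        });
}
```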

## Copilot Should

- Generate idiomatic ASP.NET Core code with minimal controller logic
- Use EF Core async APIs with `AsNoTracking()` for reads
@@ -44,14 +45,14 @@ Validation Caching
- Use primary constructors for DI
- Implement structured logging with `ILogger<T>`

## 🚫 Copilot Should Avoid
## Copilot Should Avoid

- Synchronous EF Core APIs
- Controller business logic (belongs in services)
- Static service/repository classes
- `ConfigureAwait(false)` (unnecessary in ASP.NET Core)
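
Read paths that follow these do/don't lists might look like this (the entity, context, and interface are assumed for the sketch):

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context; the real project defines its own.
public class Player
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public class AppDbContext(DbContextOptions<AppDbContext> options) : DbContext(options)
{
    public DbSet<Player> Players => Set<Player>();
}

public interface IPlayerRepository
{
    Task<Player?> GetByIdAsync(int id, CancellationToken ct);
    Task<List<Player>> GetAllAsync(CancellationToken ct);
}

// Async EF Core APIs only, with AsNoTracking() on read-only queries.
public class PlayerRepository(AppDbContext db) : IPlayerRepository
{
    public Task<Player?> GetByIdAsync(int id, CancellationToken ct) =>
        db.Players.AsNoTracking().FirstOrDefaultAsync(p => p.Id == id, ct);

    public Task<List<Player>> GetAllAsync(CancellationToken ct) =>
        db.Players.AsNoTracking().ToListAsync(ct);
}
```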

## Quick Commands

```bash
# Run with hot reload
@@ -66,12 +67,21 @@ docker compose up
# Swagger: https://localhost:9000/swagger
```

## 📚 Need More Detail?
## Load On-Demand Files

**For operational procedures**: Load `#file:AGENTS.md`
**For Docker expertise**: *(Planned)* `#file:SKILLS/docker-containerization/SKILL.md`
**For testing patterns**: *(Planned)* `#file:SKILLS/testing-patterns/SKILL.md`
**Load `#file:AGENTS.md` when:**
- "How do I run tests with coverage?"
- "CI/CD pipeline setup or troubleshooting"
- "Database migration procedures"
- "Publishing/deployment workflows"
- "Detailed troubleshooting guides"

**Load `#file:SKILLS/<skill-name>/SKILL.md` (planned):**
- Docker optimization: `docker-containerization/SKILL.md`
- Testing patterns: `testing-patterns/SKILL.md`

**Human-readable overview**: See `README.md` (not auto-loaded)

---

💡 **Why this structure?** Copilot auto-loads this file on every chat (~500 tokens). Loading `AGENTS.md` or `SKILLS/` explicitly gives you deep context only when needed, saving 80% of your token budget!
**Why this structure?** Base instructions (~600 tokens) load automatically. On-demand files (~2,550 tokens) load only when needed, saving roughly 80% of tokens per chat (about 2,550 of a combined ~3,150).
8 changes: 4 additions & 4 deletions AGENTS.md
@@ -1,9 +1,9 @@
# AGENTS.md

> **Token Efficiency Note**: This file contains complete operational instructions (~2,500 tokens).
> **Auto-loaded**: NO (load explicitly with `#file:AGENTS.md` when you need detailed procedures)
> **When to load**: Complex workflows, troubleshooting, CI/CD setup, detailed architecture questions
> **Related files**: See `#file:.github/copilot-instructions.md` for quick context (auto-loaded, ~500 tokens)
> **Token Efficiency**: Complete operational instructions (~2,550 tokens).
> **Auto-loaded**: NO (load explicitly with `#file:AGENTS.md` when needed)
> **When to load**: Complex workflows, troubleshooting, CI/CD setup, detailed architecture
> **Related files**: `#file:.github/copilot-instructions.md` (auto-loaded, ~650 tokens)

---

165 changes: 165 additions & 0 deletions scripts/count-tokens.sh
@@ -0,0 +1,165 @@
#!/bin/bash
# 📊 Token Counter for Copilot Instruction Files
# Uses tiktoken (OpenAI's tokenizer) for accurate counting
# Approximation: ~0.75 words per token (English text)

set -e

echo "📊 Token Analysis for Copilot Instructions"
echo "=========================================="
echo ""

# Check if tiktoken is available
if command -v python3 &> /dev/null; then
    # Try to use tiktoken for accurate counting
    if python3 -c "import tiktoken" 2>/dev/null; then
        echo "✅ Using tiktoken (accurate Claude/GPT tokenization)"
        echo ""
    else
        # tiktoken not found - offer to install
        echo "⚠️ tiktoken not installed"
        echo ""

        # Detect non-interactive environment (CI/CD)
        if [ ! -t 0 ] || [ -n "$CI" ] || [ -n "$CI_CD" ]; then
            echo "🤖 Non-interactive environment detected (CI/CD)"
            echo "📝 Using word-based approximation"
            echo " (To auto-install in CI, set AUTO_INSTALL_TIKTOKEN=1)"
            echo ""
            USE_APPROX=1
        elif [ -n "$AUTO_INSTALL_TIKTOKEN" ]; then
            echo "📥 Installing tiktoken (AUTO_INSTALL_TIKTOKEN=1)..."
            if pip3 install tiktoken --quiet; then
                echo "✅ tiktoken installed successfully!"
                echo ""
                # Re-run the script after installation
                exec "$0" "$@"
            else
                echo "❌ Installation failed. Using word-based approximation instead."
                echo ""
                USE_APPROX=1
            fi
        else
            echo "tiktoken provides accurate token counting for Claude/GPT models."
            read -p "📦 Install tiktoken now? (y/n): " -n 1 -r
            echo ""
            if [[ $REPLY =~ ^[Yy]$ ]]; then
                echo "📥 Installing tiktoken..."
                if pip3 install tiktoken --quiet; then
                    echo "✅ tiktoken installed successfully!"
                    echo ""
                    # Re-run the script after installation
                    exec "$0" "$@"
                else
                    echo "❌ Installation failed. Using word-based approximation instead."
                    echo ""
                    USE_APPROX=1
                fi
            else
                echo "📝 Using word-based approximation instead"
                echo " (Install manually: pip3 install tiktoken)"
                echo ""
                USE_APPROX=1
            fi
        fi
    fi

    # Only run tiktoken if it's available and we didn't set USE_APPROX
    if [ -z "$USE_APPROX" ] && python3 -c "import tiktoken" 2>/dev/null; then

        # Create temporary Python script
        cat > /tmp/count_tokens.py << 'PYTHON'
import tiktoken
import sys

# cl100k_base is used by GPT-4, Claude uses similar tokenization
encoding = tiktoken.get_encoding("cl100k_base")

file_path = sys.argv[1]
with open(file_path, 'r', encoding='utf-8') as f:
    content = f.read()

tokens = encoding.encode(content)
print(len(tokens))
PYTHON

        # Count tokens for each file
        echo "📄 .github/copilot-instructions.md"
        if [ -f ".github/copilot-instructions.md" ]; then
            COPILOT_TOKENS=$(python3 /tmp/count_tokens.py .github/copilot-instructions.md 2>&1 | grep -v "ERROR:root:code for hash" | tail -1)
            echo " Tokens: $COPILOT_TOKENS"
        else
            echo " ⚠️ File not found, skipping"
            COPILOT_TOKENS=0
        fi
        echo ""

        echo "📄 AGENTS.md"
        if [ -f "AGENTS.md" ]; then
            AGENTS_TOKENS=$(python3 /tmp/count_tokens.py AGENTS.md 2>&1 | grep -v "ERROR:root:code for hash" | tail -1)
            echo " Tokens: $AGENTS_TOKENS"
        else
            echo " ⚠️ File not found, skipping"
            AGENTS_TOKENS=0
        fi
        echo ""

        # Calculate total
        TOTAL=$((COPILOT_TOKENS + AGENTS_TOKENS))
        echo "📊 Summary"
        echo " Base load (auto): $COPILOT_TOKENS tokens"
        echo " On-demand load: $AGENTS_TOKENS tokens"
        echo " Total (if both): $TOTAL tokens"
        echo ""

        # Check against target
        TARGET=600
        LIMIT=650
        if [ $COPILOT_TOKENS -le $TARGET ]; then
            echo "✅ copilot-instructions.md within target ($TARGET tokens)"
        elif [ $COPILOT_TOKENS -le $LIMIT ]; then
            echo "⚠️ copilot-instructions.md over target but within limit ($LIMIT tokens)"
        else
            echo "❌ copilot-instructions.md exceeds limit! Optimization required."
        fi

        # Calculate savings (guard against division by zero)
        if [ $TOTAL -gt 0 ]; then
            SAVINGS=$((AGENTS_TOKENS * 100 / TOTAL))
            echo "💡 Savings: ${SAVINGS}% saved when AGENTS.md not needed"
        else
            echo "💡 Savings: 0% (no tokens to count)"
        fi

        # Cleanup
        rm /tmp/count_tokens.py
    fi
else
echo "❌ Python3 not found"
echo " Python 3 is required for token counting"
echo " Install from: https://www.python.org/downloads/"
echo ""
exit 1
fi

# Fallback: word-based approximation
if [ -n "$USE_APPROX" ]; then
echo "📄 .github/copilot-instructions.md"
WORDS=$(wc -w < .github/copilot-instructions.md | tr -d ' ')
APPROX_TOKENS=$((WORDS * 4 / 3))
echo " Words: $WORDS"
echo " Approx tokens: $APPROX_TOKENS"
echo ""

echo "📄 AGENTS.md"
WORDS=$(wc -w < AGENTS.md | tr -d ' ')
APPROX_TOKENS=$((WORDS * 4 / 3))
echo " Words: $WORDS"
echo " Approx tokens: $APPROX_TOKENS"
echo ""

echo "💡 Note: Run script again to install tiktoken for accurate counts"
fi

echo ""
echo "=========================================="