
Commit bf6a611
Commit message: udpated67
1 parent b1f1f7f commit bf6a611


91 files changed: +8064 additions, −17694 deletions


.env

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@ BLOOMBERG_API_KEY=your_bloomberg_key
 COURTLISTENER_API_KEY=your_courtlistener_key
 EDUCATION_API_KEY=your_nlp_api_key
 GEMINI_API_KEY=AIzaSyCtRbYSCI3RRqXJV3xRRUCmCrZpYfXMsKQ
+OPENAI_API_KEY=your_openai_api_key_here
 DEEPCHEM_API_KEY=your_deepchem_key
 SUPPORT_API_KEY=your_support_key
 Pinecone_API_KEY=pcsk_3rGYF6_2Gbi7QfoUxxHXD5Wps3PifJFrEUecs2FuUQtthGGbYaWWiyoxhxGrJHcYHrUP2m
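The diff adds OPENAI_API_KEY as another KEY=value entry in the same format as the rest of the file. As a rough illustration of how such a file gets turned into a key/value mapping (the backend actually relies on python-dotenv's load_dotenv(); the hand-rolled parser below is only a simplified stand-in, using placeholder values):

```python
import os
import tempfile

def parse_env_file(path):
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Demo against a throwaway file with placeholder values.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("OPENAI_API_KEY=your_openai_api_key_here\n")
    f.write("# a comment line\n")
    f.write("SUPPORT_API_KEY=your_support_key\n")
    path = f.name

env = parse_env_file(path)
print(env["OPENAI_API_KEY"])  # your_openai_api_key_here
os.unlink(path)
```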

.gitignore

Lines changed: 7 additions & 0 deletions

@@ -38,3 +38,10 @@ backend/venv/
 backend/.venv/
 **/torch/lib/
 **/site-packages/torch/lib/
+
+dotenv.local
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+
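The newly ignored filenames follow the Create React App naming convention, where a more specific env file (such as .env.development.local) is loaded in preference to a more general one (.env). A toy sketch of that first-match priority rule, with the ordering assumed from the CRA convention rather than anything in this repository:

```python
import os
import tempfile

def pick_env_file(directory, candidates):
    """Return the first existing file from a priority-ordered candidate list."""
    for name in candidates:
        if os.path.exists(os.path.join(directory, name)):
            return name
    return None

# Priority order roughly mirroring the CRA convention for development builds.
PRIORITY = [".env.development.local", ".env.local", ".env.development", ".env"]

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, ".env"), "w").close()
    open(os.path.join(d, ".env.local"), "w").close()
    chosen = pick_env_file(d, PRIORITY)
    print(chosen)  # .env.local
```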

CODE_OF_CONDUCT.md

Lines changed: 5 additions & 1 deletion

@@ -41,21 +41,25 @@ Instances of abusive, harassing, or otherwise unacceptable behavior may be repor
 Project maintainers will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
 
 ### 1. Correction
+
 **Impact**: Use of inappropriate language or other behavior deemed unprofessional.
 **Consequence**: A private, written warning, providing clarity around the nature of the violation.
 
 ### 2. Warning
+
 **Impact**: A violation through a single incident or series of actions.
 **Consequence**: A warning with consequences for continued behavior.
 
 ### 3. Temporary Ban
+
 **Impact**: A serious violation of community standards.
 **Consequence**: A temporary ban from any sort of interaction or public communication with the community.
 
 ### 4. Permanent Ban
+
 **Impact**: Demonstrating a pattern of violation of community standards.
 **Consequence**: A permanent ban from any sort of public interaction within the community.
 
 ## Attribution
 
-This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).
+This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).

Dockerfile

Lines changed: 15 additions & 3 deletions

@@ -4,20 +4,32 @@ FROM python:3.11-slim
 # Set working directory
 WORKDIR /app
 
-# Install system dependencies
+# Install system dependencies and Bloomberg API requirements
 RUN apt-get update && apt-get install -y \
     build-essential \
+    libxml2-dev \
+    libxslt-dev \
+    python3-dev \
     && rm -rf /var/lib/apt/lists/*
 
 # Copy requirements file
 COPY requirements.txt .
 
-# Install Python dependencies
-RUN pip install --no-cache-dir -r requirements.txt
+# Install Python dependencies including Bloomberg API
+RUN pip install --no-cache-dir -r requirements.txt && \
+    pip install --no-cache-dir --index-url=https://blpapi.bloomberg.com/repository/releases/python/simple/ blpapi
 
 # Copy source code
 COPY . .
 
+# Set Bloomberg environment variables
+ENV BLPAPI_ROOT=/app/bloomberg
+ENV PATH="${PATH}:${BLPAPI_ROOT}/bin"
+ENV LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${BLPAPI_ROOT}/lib"
+
+# Create directory for Bloomberg API logs
+RUN mkdir -p /app/bloomberg/logs
+
 # Expose port
 EXPOSE 8000
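The ENV lines append to PATH and LD_LIBRARY_PATH rather than replacing them: `${PATH}` expands to the variable's existing value at build time. The same append semantics can be sketched in Python (BLPAPI_ROOT=/app/bloomberg comes from the Dockerfile; the rest is illustrative):

```python
import os

# Simulate the Dockerfile's ENV lines against a copy of the environment.
env = dict(os.environ)
env.setdefault("PATH", "/usr/bin")
env["BLPAPI_ROOT"] = "/app/bloomberg"
env["PATH"] = env["PATH"] + ":" + env["BLPAPI_ROOT"] + "/bin"
# If LD_LIBRARY_PATH is unset, this yields ":/app/bloomberg/lib" (leading colon).
env["LD_LIBRARY_PATH"] = env.get("LD_LIBRARY_PATH", "") + ":" + env["BLPAPI_ROOT"] + "/lib"

print(env["PATH"].endswith("/app/bloomberg/bin"))  # True
```

One caveat worth knowing: when LD_LIBRARY_PATH is unset in the base image, the `${LD_LIBRARY_PATH}:` expansion leaves a leading colon, which the dynamic loader treats as an empty entry (the current directory) in the library search path.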

README.md

Lines changed: 5 additions & 1 deletion

@@ -1,4 +1,4 @@
-# M.A.D.H.A.V.A.
+# M.A.D.H.A.V.A
 
 <div align="center">
   <img src="client/src/logo.png" alt="M.A.D.H.A.V.A. Logo" width="300" />
@@ -27,12 +27,14 @@ M.A.D.H.A.V.A. is an advanced AI-powered assistant that provides intelligent ana
 ## Getting Started
 
 1. Clone the repository
+
 ```bash
 git clone https://github.com/yourusername/M.A.D.H.A.V.A..git
 cd M.A.D.H.A.V.A.
 ```
 
 2. Install dependencies
+
 ```bash
 # Backend
 python -m venv venv
@@ -45,6 +47,7 @@ npm install
 ```
 
 3. Start the application
+
 ```bash
 # Backend
 python main.py
@@ -57,6 +60,7 @@ npm start
 ## Architecture
 
 The application follows a modern microservices architecture:
+
 - Frontend: React.js with modern UI/UX
 - Backend: FastAPI with Python
 - Database: MongoDB, Redis, Vector Store

SECURITY.md

Lines changed: 1 addition & 1 deletion

@@ -89,4 +89,4 @@ Our project implements several security measures:
 
 ## Acknowledgments
 
-We would like to thank all security researchers and contributors who help keep our project secure. Your efforts are greatly appreciated.
+We would like to thank all security researchers and contributors who help keep our project secure. Your efforts are greatly appreciated.

backend/Dockerfile

Lines changed: 12 additions & 0 deletions

@@ -0,0 +1,12 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+COPY . .
+
+EXPOSE 8000
+
+CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

backend/llm.py

Lines changed: 82 additions & 38 deletions

@@ -1,10 +1,33 @@
-import requests
+import os
 from typing import List, Dict, Any, Optional
+from dotenv import load_dotenv
+import google.generativeai as genai
+from openai import OpenAI
+import json
+
+load_dotenv()
 
 class LLM:
-    def __init__(self, model: str = "mistral"):
+    def __init__(self, model: str = "gemini"):
         self.model = model
-        self.api_url = "http://localhost:11434/api/generate"
+        self.gemini_api_key = os.getenv('GEMINI_API_KEY')
+        self.openai_api_key = os.getenv('OPENAI_API_KEY')
+
+        # Initialize Gemini if API key is available
+        if self.gemini_api_key:
+            genai.configure(api_key=self.gemini_api_key)
+            self.gemini_model = genai.GenerativeModel('gemini-pro')
+        else:
+            self.gemini_model = None
+
+        # Initialize OpenAI if API key is available
+        if self.openai_api_key:
+            self.openai_client = OpenAI(api_key=self.openai_api_key)
+        else:
+            self.openai_client = None
+
+        if not self.gemini_model and not self.openai_client:
+            raise ValueError("No LLM API keys configured. Please set either GEMINI_API_KEY or OPENAI_API_KEY in your environment.")
 
     async def generate_response(self, query: str, context: List[Dict[str, Any]], domain: str = "finance") -> str:
         # Format context for the prompt
@@ -20,19 +43,10 @@ async def generate_response(self, query: str, context: List[Dict[str, Any]], dom
             "legal": "You are a legal analysis AI assistant specializing in contract analysis, regulatory compliance, and risk assessment.",
             "news": "You are a fact-checking AI assistant that cross-references claims with reliable sources and provides journalistic analysis.",
             "ecommerce": "You are an AI-powered product advisor specialized in product comparisons and personalized recommendations.",
-            "education": "You are an educational AI tutor specializing in explaining complex concepts using proven teaching methods.",
-            "code": "You are a code assistant AI specializing in providing implementation examples, debugging help, and best practices.",
-            "hr": "You are an HR automation AI specializing in candidate assessment, policy analysis, and recruitment optimization.",
-            "travel": "You are a travel planning AI specializing in creating personalized itineraries and travel recommendations.",
-            "science": "You are a scientific research AI specializing in analyzing research papers and summarizing scientific findings.",
-            "cybersecurity": "You are a cybersecurity AI specializing in threat intelligence analysis and security recommendations.",
-            "knowledge": "You are a knowledge management AI specializing in analyzing internal documents and extracting key insights.",
-            "realestate": "You are a real estate analysis AI specializing in market trends, property insights, and investment opportunities.",
-            "fitness": "You are a fitness and health coaching AI specializing in personalized exercise and nutrition recommendations.",
-            "support": "You are a customer support AI specializing in providing accurate technical assistance and troubleshooting guidance."
+            "education": "You are an educational AI tutor specializing in explaining complex concepts using proven teaching methods."
         }
 
-        prompt = f"""{domain_prompts.get(domain, domain_prompts["knowledge"])} Use the following context to answer the question.
+        prompt = f"""{domain_prompts.get(domain, domain_prompts["finance"])} Use the following context to answer the question.
 If you cannot answer the question based on the context, say so.
 
 Context:
@@ -42,19 +56,34 @@ async def generate_response(self, query: str, context: List[Dict[str, Any]], dom
 
 Answer:"""
 
-        response = requests.post(
-            self.api_url,
-            json={
-                "model": self.model,
-                "prompt": prompt,
-                "stream": False
-            }
-        )
+        # Try Gemini first if available
+        if self.gemini_model:
+            try:
+                response = self.gemini_model.generate_content(prompt)
+                return response.text
+            except Exception as e:
+                print(f"Gemini API error: {e}")
+                if not self.openai_client:  # If no fallback available
+                    raise e
 
-        if response.status_code != 200:
-            raise Exception("Failed to get response from LLM")
-
-        return response.json()["response"]
+        # Fallback to OpenAI if Gemini fails or is not available
+        if self.openai_client:
+            try:
+                response = self.openai_client.chat.completions.create(
+                    model="gpt-3.5-turbo",
+                    messages=[
+                        {"role": "system", "content": domain_prompts.get(domain, domain_prompts["finance"])},
+                        {"role": "user", "content": prompt}
+                    ],
+                    temperature=0.7,
+                    max_tokens=1000
+                )
+                return response.choices[0].message.content
+            except Exception as e:
+                print(f"OpenAI API error: {e}")
+                raise e
+
+        raise Exception("No LLM services available")
 
     async def summarize(self, text: str, instruction: str) -> str:
         """Summarize text with specific instructions."""
@@ -65,19 +94,34 @@ async def summarize(self, text: str, instruction: str) -> str:
 
 Summary:"""
 
-        response = requests.post(
-            self.api_url,
-            json={
-                "model": self.model,
-                "prompt": prompt,
-                "stream": False
-            }
-        )
+        # Try Gemini first if available
+        if self.gemini_model:
+            try:
+                response = self.gemini_model.generate_content(prompt)
+                return response.text
+            except Exception as e:
+                print(f"Gemini API error: {e}")
+                if not self.openai_client:
+                    raise e
 
-        if response.status_code != 200:
-            raise Exception("Failed to get summary from LLM")
-
-        return response.json()["response"]
+        # Fallback to OpenAI
+        if self.openai_client:
+            try:
+                response = self.openai_client.chat.completions.create(
+                    model="gpt-3.5-turbo",
+                    messages=[
+                        {"role": "system", "content": "You are a helpful AI assistant that summarizes text based on given instructions."},
+                        {"role": "user", "content": prompt}
+                    ],
+                    temperature=0.7,
+                    max_tokens=500
+                )
+                return response.choices[0].message.content
+            except Exception as e:
+                print(f"OpenAI API error: {e}")
+                raise e
+
+        raise Exception("No LLM services available")
 
     async def fact_check(self, claim: str, sources: List[str]) -> Dict[str, Any]:
         """Fact check a claim against provided sources."""

backend/main.py

Lines changed: 26 additions & 18 deletions

@@ -7,19 +7,31 @@
 import uvicorn
 import json
 import os
+from dotenv import load_dotenv
 
 from embeddings import EmbeddingStore
 from llm import LLM
 from metrics_extractor import MetricsExtractor
 from alert_manager import AlertManager
 from domain_processors import DomainProcessor
 
-app = FastAPI(title="ANAND - Advanced Neural Assistance for Numerous Domain Intelligence")
+# Load environment variables
+load_dotenv()
 
-# Configure CORS
+app = FastAPI(
+    title="M.A.D.H.A.V.A.",
+    description="Multi-domain Analytical Data Harvesting & Automated Verification Assistant",
+    version="1.0.0"
+)
+
+# Configure CORS with more specific settings
 app.add_middleware(
     CORSMiddleware,
-    allow_origins=["http://localhost:5173"],
+    allow_origins=[
+        "http://localhost:5173",
+        "http://127.0.0.1:5173",
+        "http://0.0.0.0:5173"
+    ],
     allow_credentials=True,
     allow_methods=["*"],
     allow_headers=["*"],
@@ -155,19 +167,15 @@ async def process_documents():
     except Exception as e:
         print(f"Error processing documents for domain {domain}: {str(e)}")
 
-if __name__ == "__main__":
-    import asyncio
-
-    async def main():
-        for domain in embedding_stores.keys():
-            os.makedirs(f"data/{domain}", exist_ok=True)
+@app.get("/health")
+async def health_check():
+    return {"status": "healthy"}
 
-        await asyncio.gather(
-            alert_manager.start_websocket_server(),
-            process_documents()
-        )
-
-    loop = asyncio.get_event_loop()
-    loop.create_task(main())
-
-    uvicorn.run(app, host="0.0.0.0", port=8000)
+if __name__ == "__main__":
+    port = int(os.getenv('PORT', 5000))
+    uvicorn.run(
+        app,
+        host="0.0.0.0",  # Allow connections from all interfaces
+        port=port,
+        reload=True
+    )
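CORSMiddleware with an explicit allow_origins list compares the request's Origin header by exact string match (unless allow_origin_regex is configured), so every scheme/host/port spelling must be listed separately, which is why all three localhost variants appear above. A toy version of that check, with the exact-match semantics assumed from Starlette's documented default:

```python
ALLOWED_ORIGINS = [
    "http://localhost:5173",
    "http://127.0.0.1:5173",
    "http://0.0.0.0:5173",
]

def is_allowed(origin: str) -> bool:
    """Exact-match origin check, mirroring CORSMiddleware without a regex."""
    return origin in ALLOWED_ORIGINS

print(is_allowed("http://localhost:5173"))   # True
print(is_allowed("https://localhost:5173"))  # False: scheme differs
```

Separately, note that uvicorn's reload option only takes effect when the app is given as an import string (as in backend/Dockerfile's `"main:app"`), not as an app object like the `__main__` block passes here.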
