Update README to document multi-provider LLM support

kpcto 2025-02-01 01:07:24 +00:00
parent 23b9022fae
commit 7e646148d6


@@ -18,12 +18,16 @@ Last-In AI is your friendly neighborhood research paper analyzer that turns thos
- 🔍 **Smart Analysis**: Reads papers so you don't have to (but you probably should anyway)
- 📊 **PDF Processing**: Turns those pesky PDFs into structured data faster than a grad student's coffee run
- 🤹 **Orchestration**: Juggles multiple tasks like a circus professional
- 🔄 **Multi-Provider Support**: Switch between different LLM providers like a DJ switching tracks
- ⚙️ **Flexible Configuration**: Customize your LLM settings without breaking a sweat
## 🏗️ Architecture
```
src/
├── analysis/            # Where the magic happens 🎩
│ ├── analysis_engine.py # Core analysis logic
│ └── llm_provider.py # LLM provider abstraction
├── data_acquisition/    # Paper fetching wizardry 📥
├── orchestration/       # The puppet master 🎭
└── processing/          # PDF wrestling championship 📄
@@ -53,6 +57,7 @@ pip install -r requirements.txt
```bash
cp .env.example .env
# Edit .env with your favorite text editor
# Don't forget to add your chosen LLM provider's API key!
```
### Method 2: Docker Installation 🐳
@@ -89,6 +94,24 @@ controller.process_papers("quantum_computing")
# Now go grab a coffee, you've earned it!
```
## ⚙️ LLM Configuration
The system supports multiple LLM providers out of the box. Configure your preferred provider in `config/settings.yaml`:
```yaml
llm:
provider: deepseek # Supported: openai, deepseek
temperature: 0.5
max_tokens: 4096
model: deepseek-r1 # Model name specific to the provider
```
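For orientation, here is a minimal sketch of reading that section with PyYAML; the `load_llm_config` helper is hypothetical, not a function the project actually ships:
```python
# Minimal sketch, assuming PyYAML is installed (pip install pyyaml);
# load_llm_config is a hypothetical helper, not part of the codebase.
import yaml

def load_llm_config(path="config/settings.yaml"):
    with open(path, encoding="utf-8") as f:
        settings = yaml.safe_load(f)
    return settings["llm"]  # e.g. {'provider': 'deepseek', 'model': 'deepseek-r1', ...}
```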
Don't forget to set your API keys in `.env`:
```bash
OPENAI_API_KEY=your-openai-key-here
DEEPSEEK_API_KEY=your-deepseek-key-here
```
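If the app loads `.env` via python-dotenv (an assumption; check `requirements.txt`), the keys simply become environment variables:
```python
# Sketch only: assumes python-dotenv is used to load .env at startup.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the environment
deepseek_key = os.getenv("DEEPSEEK_API_KEY")
openai_key = os.getenv("OPENAI_API_KEY")
```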
## 🛠️ Development
### Running Tests
@@ -104,6 +127,12 @@ docker compose -f docker/docker-compose.yml run --rm app python -m pytest
- See `.env.example` for required configuration
- Docker configurations are documented in `docker/README.md`
### Adding New LLM Providers
1. Extend the `LLMProvider` class in `src/analysis/llm_provider.py`
2. Implement the `get_llm()` method for your provider
3. Add your provider to the factory function
4. Update configuration as needed (steps 1-3 are sketched below)
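A minimal sketch of steps 1-3, under the assumption that `LLMProvider` exposes a single abstract `get_llm()` and that the factory is a simple name-to-class lookup; check `src/analysis/llm_provider.py` for the real signatures:
```python
# Sketch only: the base-class shape and factory registration below are
# assumptions, not the project's actual code.
import os

from src.analysis.llm_provider import LLMProvider  # step 1: extend this class

class MyProvider(LLMProvider):
    """Hypothetical provider wrapping an imaginary SDK."""

    def get_llm(self):
        # step 2: build and return whatever client object the engine expects
        from my_llm_sdk import Client  # placeholder import, not a real package
        return Client(api_key=os.getenv("MYPROVIDER_API_KEY"))

# step 3: register the class wherever the factory maps provider names,
# e.g. PROVIDERS["myprovider"] = MyProvider
```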
## 🤝 Contributing
Found a bug? Want to add a feature? Have a brilliant idea? We're all ears!
@@ -123,6 +152,7 @@ MIT License - Because sharing is caring! See [LICENSE](LICENSE) for more details
- This README was written by an AI (plot twist!)
- The code is probably smarter than most of us
- We counted the coffee cups consumed during development, but lost count at 42
- Our LLM providers are like pizza toppings - everyone has their favorite!
---
Made with 💻 and questionable amounts of ☕