# [ Work in Progress ]

# 🤖 Last-In AI: Your Papers Please!

[Python](https://www.python.org/downloads/) | [arXiv](https://arxiv.org) | [License](LICENSE) | [Docker](docker/README.md)

> *"Because reading research papers manually is so 2023!"* 🎯
## 🎭 What's This All About?

Last-In AI is your friendly neighborhood research paper analyzer that turns those dense academic PDFs into digestible insights. Think of it as your very own academic paper whisperer, but with more silicon and less caffeine.
## 🌟 Features

- 📚 **arXiv Integration**: Fetches papers faster than you can say "peer review"
- 🔍 **Smart Analysis**: Reads papers so you don't have to (but you probably should anyway)
- 📊 **PDF Processing**: Turns those pesky PDFs into structured data faster than a grad student's coffee run
- 🤹 **Orchestration**: Juggles multiple tasks like a circus professional
- 🔄 **Multi-Provider Support**: Switch between LLM providers like a DJ switching tracks
- ⚙️ **Flexible Configuration**: Customize your LLM settings without breaking a sweat
## 🏗️ Architecture

```
src/
├── analysis/               # Where the magic happens 🎩
│   ├── analysis_engine.py  # Core analysis logic
│   └── llm_provider.py     # LLM provider abstraction
├── data_acquisition/       # Paper fetching wizardry 📥
├── orchestration/          # The puppet master 🎭
└── processing/             # PDF wrestling championship 📄

docker/                     # Container configuration 🐳
├── Dockerfile              # Multi-stage build definition
├── docker-compose.yml      # Service orchestration
└── README.md               # Docker setup documentation
```
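
To give a feel for what lives in `data_acquisition/`, here is a minimal standalone sketch of fetching recent papers with the third-party `arxiv` package. This is an illustration only; the module's actual client and dependencies may differ.

```python
# Minimal sketch: fetch recent arXiv papers with the third-party `arxiv`
# package (pip install arxiv). Illustrative only; the project's
# data_acquisition module may be implemented differently.
import arxiv

client = arxiv.Client()
search = arxiv.Search(
    query="quantum computing",
    max_results=5,
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

for paper in client.results(search):
    print(f"{paper.title}\n  PDF: {paper.pdf_url}\n")
```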
## 🚀 Getting Started

### Method 1: Local Installation

1. Clone this repository (because good things should be shared)

```bash
git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
cd lastin-ai
```

2. Install dependencies (they're like friends, but for your code)

```bash
pip install -r requirements.txt
```

3. Set up your environment (like making your bed, but more technical)

```bash
cp .env.example .env
# Edit .env with your favorite text editor
# Don't forget to add your chosen LLM provider's API key!
```
### Method 2: Docker Installation 🐳

1. Clone and navigate to the repository

```bash
git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
cd lastin-ai
```

2. Set up environment variables

```bash
cp .env.example .env
# Edit .env with your configuration
```

3. Build and run with Docker Compose

```bash
docker compose -f docker/docker-compose.yml up --build
```

For detailed Docker setup and configuration options, see the [Docker Documentation](docker/README.md).
## 🎮 Usage

```python
from src.orchestration.agent_controller import AgentController

# Initialize the brain
controller = AgentController()

# Let it do its thing
controller.process_papers("quantum_computing")
# Now go grab a coffee, you've earned it!
```
## ⚙️ LLM Configuration

The system supports multiple LLM providers out of the box. Configure your preferred provider in `config/settings.yaml`:

```yaml
llm:
  provider: deepseek   # Supported: openai, deepseek
  temperature: 0.5
  max_tokens: 4096
  model: deepseek-r1   # Model name specific to the provider
```

Don't forget to set your API keys in `.env`:

```bash
OPENAI_API_KEY=your-openai-key-here
DEEPSEEK_API_KEY=your-deepseek-key-here
```
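
For a sense of how these two sources could come together at runtime, here is a minimal sketch assuming PyYAML and python-dotenv; the project's real loading code lives elsewhere and may differ.

```python
# Minimal sketch: combine config/settings.yaml with API keys from .env.
# Assumes PyYAML and python-dotenv; the project's actual wiring may differ.
import os

import yaml
from dotenv import load_dotenv

load_dotenv()  # pulls OPENAI_API_KEY / DEEPSEEK_API_KEY into the environment

with open("config/settings.yaml") as f:
    llm_cfg = yaml.safe_load(f)["llm"]

provider = llm_cfg["provider"]  # e.g. "deepseek"
api_key = os.getenv(f"{provider.upper()}_API_KEY")
if not api_key:
    raise RuntimeError(f"Missing {provider.upper()}_API_KEY in .env")

print(f"Using {provider}/{llm_cfg['model']} at temperature {llm_cfg['temperature']}")
```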
## 🛠️ Development

### Running Tests

```bash
# Local
python -m pytest

# Docker
docker compose -f docker/docker-compose.yml run --rm app python -m pytest
```

### Environment Variables

- See `.env.example` for required configuration
- Docker configurations are documented in `docker/README.md`

### Adding New LLM Providers

1. Extend the `LLMProvider` class in `src/analysis/llm_provider.py` (see the sketch below)
2. Implement the `get_llm()` method for your provider
3. Add your provider to the factory function
4. Update configuration as needed
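
As a rough illustration of steps 1-3, a new provider might look like this. The actual `LLMProvider` interface and factory signatures live in `src/analysis/llm_provider.py` and may differ; every name here beyond `LLMProvider` and `get_llm()` is a placeholder.

```python
# Hypothetical sketch of steps 1-3. The real LLMProvider base class and
# factory live in src/analysis/llm_provider.py; signatures may differ.
from src.analysis.llm_provider import LLMProvider


class MyShinyProvider(LLMProvider):
    """Step 1: extend the base class for your provider."""

    def get_llm(self):
        # Step 2: return whatever client object the analysis engine expects.
        raise NotImplementedError("wire up your provider's SDK here")


# Step 3: register it in the factory, e.g. something along the lines of:
#   PROVIDERS["myshiny"] = MyShinyProvider
```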
## 🤝 Contributing

Found a bug? Want to add a feature? Have a brilliant idea? We're all ears!
Just remember:

1. Fork it 🍴
2. Branch it 🌿
3. Code it 💻
4. Test it 🧪
5. PR it 🎯
## 📜 License

MIT License - Because sharing is caring! See [LICENSE](LICENSE) for more details.
## 🎭 Fun Facts

- This README was written by an AI (plot twist!)
- The code is probably smarter than most of us
- We counted the coffee cups consumed during development, but lost count at 42
- Our LLM providers are like pizza toppings - everyone has their favorite!

---

Made with 💻 and questionable amounts of ☕