# [ Work in Progress ]

# 🤖 Last-In AI: Your Papers Please!

[![Python](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![arXiv](https://img.shields.io/badge/arXiv-2401.00000-b31b1b.svg)](https://arxiv.org)
[![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
[![Docker](https://img.shields.io/badge/docker-ready-brightgreen.svg)](docker/README.md)

> *"Because reading research papers manually is so 2023!"* 🎯

## 🎭 What's This All About?

Last-In AI is your friendly neighborhood research paper analyzer that turns those dense academic PDFs into digestible insights. Think of it as your very own academic paper whisperer, but with more silicon and less caffeine.

## 🌟 Features

- 📚 **arXiv Integration**: Fetches papers faster than you can say "peer review"
- 🔍 **Smart Analysis**: Reads papers so you don't have to (but you probably should anyway)
- 📊 **PDF Processing**: Turns those pesky PDFs into structured data faster than a grad student's coffee run
- 🤹 **Orchestration**: Juggles multiple tasks like a circus professional
- 🔄 **Multi-Provider Support**: Switch between different LLM providers like a DJ switching tracks
- ⚙️ **Flexible Configuration**: Customize your LLM settings without breaking a sweat

## 🏗️ Architecture

```
src/
├── analysis/              # Where the magic happens 🎩
│   ├── analysis_engine.py # Core analysis logic
│   └── llm_provider.py    # LLM provider abstraction
├── data_acquisition/      # Paper fetching wizardry 📥
├── orchestration/         # The puppet master 🎭
└── processing/            # PDF wrestling championship 📄
docker/                    # Container configuration 🐳
├── Dockerfile             # Multi-stage build definition
├── docker-compose.yml     # Service orchestration
└── README.md              # Docker setup documentation
```

## 🚀 Getting Started

### Method 1: Local Installation

1. Clone this repository (because good things should be shared)

   ```bash
   git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
   cd lastin-ai
   ```

2. Install dependencies (they're like friends, but for your code)

   ```bash
   pip install -r requirements.txt
   ```

3. Set up your environment (like making your bed, but more technical)

   ```bash
   cp .env.example .env
   # Edit .env with your favorite text editor
   # Don't forget to add your chosen LLM provider's API key!
   ```

### Method 2: Docker Installation 🐳

1. Clone and navigate to the repository

   ```bash
   git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
   cd lastin-ai
   ```

2. Set up environment variables

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

3. Build and run with Docker Compose

   ```bash
   docker compose -f docker/docker-compose.yml up --build
   ```

For detailed Docker setup and configuration options, see the [Docker Documentation](docker/README.md).

## 🎮 Usage

```python
from src.orchestration.agent_controller import AgentController

# Initialize the brain
controller = AgentController()

# Let it do its thing
controller.process_papers("quantum_computing")

# Now go grab a coffee, you've earned it!
```

## ⚙️ LLM Configuration

The system supports multiple LLM providers out of the box.
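Under the hood, the provider you name in the config is resolved through the factory function in `src/analysis/llm_provider.py`. Here's a minimal sketch of that hand-off, assuming a factory called `create_provider` (the name and signature are guesses; only `LLMProvider` and its `get_llm()` method are documented below):

```python
# Hypothetical sketch: resolving a provider by name.
# `create_provider` is an assumed name for the factory function this README
# mentions; check src/analysis/llm_provider.py for the real one.
from src.analysis.llm_provider import create_provider

provider = create_provider("deepseek")  # matches `llm.provider` in settings.yaml
llm = provider.get_llm()                # the hook described under Development
```

In day-to-day use you won't call the factory yourself, though. Just declare your choice in the config.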
Configure your preferred provider in `config/settings.yaml`:

```yaml
llm:
  provider: deepseek    # Supported: openai, deepseek
  temperature: 0.5
  max_tokens: 4096
  model: deepseek-r1    # Model name specific to the provider
```

Don't forget to set your API keys in `.env`:

```bash
OPENAI_API_KEY=your-openai-key-here
DEEPSEEK_API_KEY=your-deepseek-key-here
```

## 🛠️ Development

### Running Tests

```bash
# Local
python -m pytest

# Docker
docker compose -f docker/docker-compose.yml run --rm app python -m pytest
```

### Environment Variables

- See `.env.example` for required configuration
- Docker configurations are documented in `docker/README.md`

### Adding New LLM Providers

1. Extend the `LLMProvider` class in `src/analysis/llm_provider.py`
2. Implement the `get_llm()` method for your provider
3. Add your provider to the factory function
4. Update the configuration as needed

(A hypothetical sketch of steps 1-3 is tucked into the P.S. at the bottom of this README.)

## 🤝 Contributing

Found a bug? Want to add a feature? Have a brilliant idea? We're all ears! Just remember:

1. Fork it 🍴
2. Branch it 🌿
3. Code it 💻
4. Test it 🧪
5. PR it 🎯

## 📜 License

MIT License - because sharing is caring! See [LICENSE](LICENSE) for more details.

## 🎭 Fun Facts

- This README was written by an AI (plot twist!)
- The code is probably smarter than most of us
- We counted the coffee cups consumed during development, but lost count at 42
- Our LLM providers are like pizza toppings - everyone has their favorite!

---

Made with 💻 and questionable amounts of ☕
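P.S. That provider sketch we promised. Everything here except `LLMProvider`, `get_llm()`, and the module path is an assumption for illustration, not the project's confirmed API:

```python
# Hypothetical sketch of a new provider. Only `LLMProvider`, `get_llm()`, and
# the module path come from this README; the constructor and provider name
# ("mistral") are invented for illustration.
from src.analysis.llm_provider import LLMProvider


class MistralProvider(LLMProvider):
    """Step 1: extend the LLMProvider base class."""

    def __init__(self, model: str, temperature: float, max_tokens: int):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens

    def get_llm(self):
        """Step 2: return whatever client object the analysis engine expects."""
        raise NotImplementedError  # construct and return your provider's client here
```

Step 3 would register `"mistral"` in the factory function, and step 4 adds `provider: mistral` (plus a model name) to `config/settings.yaml`.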