
[ Work in Progress ]

🤖 Last-In AI: Your Papers Please!


"Because reading research papers manually is so 2023!" 🎯

🎭 What's This All About?

Last-In AI is your friendly neighborhood research paper analyzer that turns those dense academic PDFs into digestible insights. Think of it as your very own academic paper whisperer, but with more silicon and less caffeine.

🌟 Features

  • 📚 arXiv Integration: Fetches papers faster than you can say "peer review"
  • 🔍 Smart Analysis: Reads papers so you don't have to (but you probably should anyway)
  • 📊 PDF Processing: Turns those pesky PDFs into structured data faster than a grad student's coffee run
  • 🤹 Orchestration: Juggles multiple tasks like a circus professional
  • 🔄 Multi-Provider Support: Switch between different LLM providers like a DJ switching tracks
  • ⚙️ Flexible Configuration: Customize your LLM settings without breaking a sweat

🏗️ Architecture

```
src/
├── analysis/             # Where the magic happens 🎩
│   ├── analysis_engine.py   # Core analysis logic
│   └── llm_provider.py      # LLM provider abstraction
├── data_acquisition/     # Paper fetching wizardry 📥
├── orchestration/        # The puppet master 🎭
└── processing/           # PDF wrestling championship 📄

docker/                  # Container configuration 🐳
├── Dockerfile           # Multi-stage build definition
├── docker-compose.yml   # Service orchestration
└── README.md            # Docker setup documentation
```
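
How do the pieces fit together? Roughly: orchestration asks data_acquisition for papers, hands the PDFs to processing, and feeds the structured text to analysis. Here's that flow in miniature. This is a sketch only, with stand-in functions (fetch_papers, extract_text, analyze) in place of the real module interfaces:

```python
# Illustrative only -- stand-ins for the real module interfaces above.
from typing import Dict, List

def fetch_papers(query: str) -> List[Dict]:
    """Stand-in for data_acquisition: would query the arXiv API."""
    return [{"title": "Example Paper", "pdf": "example.pdf"}]

def extract_text(pdf_path: str) -> str:
    """Stand-in for processing: would wrestle the PDF into text."""
    return "parsed paper text for " + pdf_path

def analyze(text: str) -> Dict:
    """Stand-in for analysis: would call the configured LLM."""
    return {"summary": text[:40]}

def run_pipeline(query: str) -> List[Dict]:
    """The orchestration flow in miniature: fetch -> process -> analyze."""
    return [analyze(extract_text(p["pdf"])) for p in fetch_papers(query)]

print(run_pipeline("quantum_computing"))
```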

🚀 Getting Started

Method 1: Local Installation

1. Clone this repository (because good things should be shared):

```bash
git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
cd lastin-ai
```

2. Install dependencies (they're like friends, but for your code):

```bash
pip install -r requirements.txt
```

3. Set up your environment (like making your bed, but more technical):

```bash
cp .env.example .env
# Edit .env with your favorite text editor
# Don't forget to add your chosen LLM provider's API key!
```

Method 2: Docker Installation 🐳

1. Clone and navigate to the repository:

```bash
git clone https://git.stevanovic.co.uk/kpcto/lastin-ai.git
cd lastin-ai
```

2. Set up environment variables:

```bash
cp .env.example .env
# Edit .env with your configuration
```

3. Build and run with Docker Compose:

```bash
docker compose -f docker/docker-compose.yml up --build
```

For detailed Docker setup and configuration options, see the Docker documentation in docker/README.md.

🎮 Usage

```python
from src.orchestration.agent_controller import AgentController

# Initialize the brain
controller = AgentController()

# Let it do its thing
controller.process_papers("quantum_computing")
# Now go grab a coffee, you've earned it!
```

⚙️ LLM Configuration

The system supports multiple LLM providers out of the box. Configure your preferred provider in config/settings.yaml:

```yaml
llm:
  provider: deepseek  # Supported: openai, deepseek
  temperature: 0.5
  max_tokens: 4096
  model: deepseek-r1  # Model name specific to the provider
```
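
Since the settings are plain YAML, reading them takes only a few lines. A minimal sketch, assuming PyYAML is installed (load_llm_settings is illustrative, not the project's actual loader):

```python
# Minimal sketch of reading the settings above, assuming PyYAML
# (pip install pyyaml). load_llm_settings is illustrative only.
import yaml

def load_llm_settings(path: str = "config/settings.yaml") -> dict:
    with open(path, "r", encoding="utf-8") as f:
        settings = yaml.safe_load(f)
    return settings["llm"]  # e.g. {"provider": "deepseek", "temperature": 0.5, ...}

llm = load_llm_settings()
print(llm["provider"], llm["model"])
```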

Don't forget to set your API keys in .env:

```bash
OPENAI_API_KEY=your-openai-key-here
DEEPSEEK_API_KEY=your-deepseek-key-here
```
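
If you're curious how those keys travel from .env into the process, one common pattern is python-dotenv. This is a sketch of that pattern, not necessarily what this project does internally:

```python
# Sketch of picking up the keys at runtime, assuming python-dotenv
# (pip install python-dotenv). The variable names match .env above.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

api_key = os.environ.get("DEEPSEEK_API_KEY") or os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("No LLM API key found -- did you edit .env?")
```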

🛠️ Development

Running Tests

```bash
# Local
python -m pytest

# Docker
docker compose -f docker/docker-compose.yml run --rm app python -m pytest
```
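
New tests go wherever pytest can discover them. A hypothetical smoke test, just to show the shape (it is not part of this repo):

```python
# tests/test_smoke.py -- hypothetical example, not an actual test in this repo
from src.orchestration.agent_controller import AgentController

def test_controller_constructs():
    # The controller should come up without arguments (see Usage above)
    assert AgentController() is not None
```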

Environment Variables

  • See .env.example for required configuration
  • Docker configurations are documented in docker/README.md

Adding New LLM Providers

  1. Extend the LLMProvider class in src/analysis/llm_provider.py
  2. Implement the get_llm() method for your provider
  3. Add your provider to the factory function
  4. Update configuration as needed
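
Put together, a new provider might look like the sketch below. The base class and factory are simplified stand-ins for the real ones in src/analysis/llm_provider.py, and AnthropicProvider is purely hypothetical:

```python
# Illustrative shape only -- the real LLMProvider base class and factory
# live in src/analysis/llm_provider.py. AnthropicProvider is hypothetical.
class LLMProvider:
    """Simplified stand-in for the project's provider abstraction."""
    def __init__(self, model: str, temperature: float = 0.5, max_tokens: int = 4096):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens

    def get_llm(self):
        raise NotImplementedError

class AnthropicProvider(LLMProvider):          # step 1: extend LLMProvider
    def get_llm(self):                         # step 2: implement get_llm()
        # Return whatever client object the analysis engine expects.
        return {"provider": "anthropic", "model": self.model}

PROVIDERS = {"anthropic": AnthropicProvider}   # step 3: register with the factory

def make_provider(name: str, **kwargs) -> LLMProvider:
    """Sketch of a factory function (step 4: wire it into your config)."""
    return PROVIDERS[name](**kwargs)

llm = make_provider("anthropic", model="claude-3").get_llm()
```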

🤝 Contributing

Found a bug? Want to add a feature? Have a brilliant idea? We're all ears! Just remember:

  1. Fork it 🍴
  2. Branch it 🌿
  3. Code it 💻
  4. Test it 🧪
  5. PR it 🎯

📜 License

MIT License - Because sharing is caring! See LICENSE for more details.

🎭 Fun Facts

  • This README was written by an AI (plot twist!)
  • The code is probably smarter than most of us
  • We counted the coffee cups consumed during development, but lost count at 42
  • Our LLM providers are like pizza toppings - everyone has their favorite!

Made with 💻 and questionable amounts of ☕