Installation
How to install the MRRA mobility prediction framework
Get started with MRRA by installing it in your Python environment.
Prerequisites
MRRA requires Python 3.10 or newer.
- Python >= 3.10
- pip package manager
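If you are unsure which interpreter `pip` points at, a quick standard-library check (no MRRA required) confirms the version requirement before you install:

```python
import sys

# MRRA requires Python 3.10+; check the running interpreter.
supported = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{'OK' if supported else 'upgrade needed'}")
```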
Basic Installation
# Basic installation
pip install mrra
# Using conda (when available)
conda install -c conda-forge mrra
# Using Poetry
poetry add mrra
Installation Variants
MRRA provides several installation options depending on your needs:
Development Installation
# Include development tools
pip install 'mrra[dev]'
GPU Support
# Include GPU acceleration support
pip install 'mrra[gpu]'
Complete Installation
# Include all optional dependencies
pip install 'mrra[all]'
Verify Installation
After installation, verify that MRRA is properly installed:
# Check if installed correctly
import mrra
print(f"MRRA version: {mrra.__version__}")
# Test basic functionality
from mrra import AgentBuilder
agent_builder = AgentBuilder()
print("✅ MRRA installed successfully!")
Installing from Source
If you want to contribute to MRRA or run the latest development version:
# Clone the repository
git clone https://github.com/yourusername/mrra.git
cd mrra
# Install in development mode
pip install -e '.[dev]'
# Run tests to verify
pytest
Post-Installation Setup
After installation, you'll need to configure your LLM provider and other services:
1. Configure LLM Provider
Set up your preferred LLM provider:
from mrra import AgentBuilder
# OpenAI Configuration
agent = AgentBuilder().with_llm(
    provider="openai",
    model="gpt-4o-mini",
    api_key="your-openai-api-key",
).build()
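Hard-coding API keys in source files is easy to leak. One common pattern, sketched here using the `MRRA_OPENAI_API_KEY` environment variable described later on this page, is to pull the key from the environment at configuration time:

```python
import os

# Read the key from the environment instead of hard-coding it; fall back to a
# placeholder so a missing variable is obvious before the agent is built.
api_key = os.environ.get("MRRA_OPENAI_API_KEY", "missing-api-key")
configured = api_key != "missing-api-key"
print("API key found" if configured
      else "Set MRRA_OPENAI_API_KEY before building the agent")

# Then pass it to the builder as shown above, e.g.:
# agent = AgentBuilder().with_llm(
#     provider="openai", model="gpt-4o-mini", api_key=api_key,
# ).build()
```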
# Qwen Configuration
agent = AgentBuilder().with_llm(
    provider="qwen",
    model="qwen-plus",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key="your-qwen-api-key",
).build()
2. Test Your Setup
Run a quick test to ensure everything is working:
from mrra import AgentBuilder
from mrra.core.types import TrajectoryBatch
import pandas as pd
# Create sample data
df = pd.DataFrame({
    'user_id': ['user_1', 'user_1'],
    'timestamp': ['2023-01-01 09:00:00', '2023-01-01 12:00:00'],
    'latitude': [31.2304, 31.2404],
    'longitude': [121.4737, 121.4837],
    'poi_id': ['poi_1', 'poi_2'],
})
# Test basic prediction
agent = (AgentBuilder()
         .with_llm(provider="openai", model="gpt-4o-mini", api_key="your-key")
         .build())
trajectory_batch = TrajectoryBatch(df)
result = agent.predict_next(user_id="user_1", history=trajectory_batch)
print(f"✅ Prediction successful: {result.predicted_location}")
Environment Variables
You can also configure MRRA using environment variables:
# Set environment variables
export MRRA_OPENAI_API_KEY="your-openai-key"
export MRRA_QWEN_API_KEY="your-qwen-key"
export MRRA_DEFAULT_PROVIDER="openai"
export MRRA_DEFAULT_MODEL="gpt-4o-mini"
Troubleshooting Installation Issues
Dependency Conflicts
If you encounter dependency conflicts:
# Create a clean virtual environment
python -m venv mrra-env
source mrra-env/bin/activate # On Windows: mrra-env\Scripts\activate
# Install MRRA
pip install mrra
GPU Support Issues
For GPU acceleration issues:
# Install with specific CUDA version
pip install 'mrra[gpu]' --extra-index-url https://download.pytorch.org/whl/cu118
# Verify GPU support
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
Import Errors
If you encounter import errors:
# Check installation
pip show mrra
# Reinstall if necessary
pip uninstall mrra
pip install mrra --no-cache-dir
Updating MRRA
Keep MRRA up to date with the latest features:
# Update to latest version
pip install --upgrade mrra
# Update with all dependencies
pip install --upgrade 'mrra[all]'
Uninstallation
To remove MRRA from your system:
# Uninstall MRRA
pip uninstall mrra
# pip cannot remove unused dependencies on its own; the third-party
# pip-autoremove tool can uninstall a package together with its orphaned dependencies
pip install pip-autoremove
pip-autoremove mrra
Configuration files and cached data will remain after uninstallation. You can manually clean them up if needed.
Next Steps
Now that MRRA is installed, proceed to:
- Configuration - Set up your LLM providers and services
- Examples - See real-world usage examples
- Development - Contributing and development guidelines