A comprehensive Robotic Process Automation (RPA) system that integrates KGG and OptimusPrime frameworks with advanced features including natural language control, predictive maintenance, and cross-system workflows.
- Cross-system workflows combining KGG & OptimusPrime
- Robust error handling with detailed logging
- HTML reporting with execution metrics
- Parameter passing between systems
- Conversational interface for RPA workflows
- Local LLM integration with OLLAMA
- Interactive mode for continuous commands
- Rule-based fallback for offline usage
- Anomaly detection in automation logs
- Component health monitoring with metrics
- Actionable recommendations for optimization
- Visual HTML reports with priority levels
- Web-based control panel
- Real-time monitoring of agents
- Visual workflow execution tracking
- One-click automation triggering
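The rule-based fallback for offline usage can be pictured as a small keyword matcher that maps free-form commands to workflow names when no local LLM is reachable. The rules, function, and workflow names below are illustrative placeholders, not the actual `nl_rpa_interface.py` API:

```python
import re

# Hypothetical keyword-to-workflow rules; the real interface
# in nl_rpa_interface.py may use different names and patterns.
RULES = [
    (r"\b(report|reporting)\b", "generate_html_report"),
    (r"\b(maintenance|health)\b", "run_predictive_maintenance"),
    (r"\b(browser|web)\b", "run_browser_automation"),
]

def parse_command(text: str) -> str:
    """Map a natural language command to a workflow name.

    Tries each keyword rule in order; falls back to a default
    workflow when nothing matches (offline, no-LLM mode).
    """
    lowered = text.lower()
    for pattern, workflow in RULES:
        if re.search(pattern, lowered):
            return workflow
    return "integrated_rpa_default"
```

In interactive mode, a loop would read a command, call a parser like this (or the local LLM when OLLAMA is available), and dispatch the resulting workflow.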
- Python 3.10+
- Conda environment management
- Node.js (for UI dashboard)
- OLLAMA (for local LLM capabilities)
# Clone the repository
git clone https://github.com/yourusername/integrated-rpa-system.git
cd integrated-rpa-system
# Create and activate conda environment
conda env create -f environment.yml
conda activate agent_f1_env
# Install additional dependencies
pip install -U langmem crewai
# Install OLLAMA (macOS)
brew install ollama
ollama pull mistral
# Install UI dependencies
cd ui
npm install

# Run the integrated RPA system
./run_integrated_rpa.sh

# Run the natural language interface in interactive mode
./run_nl_rpa.sh --interactive

# Run the predictive maintenance system
./run_predictive_maintenance.sh

# Start the UI dashboard
cd ui
npm run dev

The RPA System is deployed using GitHub Pages for the UI and Render for the API backend.
The UI is automatically deployed to GitHub Pages when changes are pushed to the main branch. You can access the deployed UI at:
https://yacinewhatchandcode.github.io/MultiAgenticSytemYBE/
The API can be deployed to Render.com using the following steps:
- Create a new Web Service on Render.com
- Connect your GitHub repository
- Select the 'api' directory as the root directory
- Set the build command to `pip install -r requirements.txt`
- Set the start command to `uvicorn main:app --host 0.0.0.0 --port $PORT`
- Add the environment variable `PYTHON_VERSION=3.10.0`
- Deploy the service
- Update the `PRODUCTION_API_URL` in `ui/app/config.ts` with your Render service URL
- Commit and push the changes to trigger a new UI deployment
To run the system locally for development:
- Start the API: `./run_api.sh`
- Start the UI: `cd ui && npm run dev`
- Access the UI at http://localhost:3000
├── integrated_rpa_automation.py # Core integration system
├── nl_rpa_interface.py # Natural language interface
├── predictive_maintenance.py # Predictive maintenance system
├── run_integrated_rpa.sh # Runner script for integrated system
├── run_nl_rpa.sh # Runner script for NL interface
├── run_predictive_maintenance.sh # Runner script for maintenance
├── environment.yml # Conda environment definition
├── ui/ # Web dashboard
├── samples/ # Sample workflows
├── output/ # Output directory
└── logs/ # Log files
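As an illustration of the kind of check `predictive_maintenance.py` performs, anomaly detection over numeric log metrics can be done with a simple z-score threshold. This is a generic sketch of the technique, not the project's actual implementation:

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold.

    values: per-run metrics pulled from automation logs, e.g.
    step execution times in seconds (illustrative data).
    """
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]
```

Flagged indices could then feed the component health report and its actionable recommendations.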
This system integrates with:
- KGG RPA System
- OptimusPrime Framework
- OLLAMA for local LLM capabilities
- Playwright for browser automation
- Selenium for UI testing
- OCR for image recognition
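Parameter passing in a cross-system workflow amounts to feeding one system's outputs into the next step's inputs. The classes below are hypothetical stand-ins for the KGG and OptimusPrime clients, sketched only to show the chaining pattern, not the real framework APIs:

```python
# Hypothetical stand-ins for the KGG and OptimusPrime clients.
class KGGStep:
    def run(self, params: dict) -> dict:
        # e.g. scrape a record and return its fields
        return {"record_id": params["target"], "status": "scraped"}

class OptimusPrimeStep:
    def run(self, params: dict) -> dict:
        # e.g. file the record that the KGG step produced
        return {"filed": params["record_id"], "status": "done"}

def run_workflow(steps, initial: dict) -> dict:
    """Chain steps, merging each step's output dict into the
    parameters passed to the next step."""
    params = initial
    for step in steps:
        params = {**params, **step.run(params)}
    return params
```

For example, `run_workflow([KGGStep(), OptimusPrimeStep()], {"target": "INV-42"})` carries the scraped `record_id` forward into the filing step.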
MIT
- CrewAI for agent orchestration
- Langchain for LLM integration
- Playwright and Selenium for browser automation
- OLLAMA for local LLM capabilities