# Redis Test App Workloads

This document describes all available workloads and their specific Redis operations.

## 🎯 Available Workloads

### **1. BasicWorkload (basic_rw)**
**Purpose**: Basic read/write operations for general testing
**Operations**:
- `SET` → `client.set(key, value)` - Set key-value pairs
- `GET` → `client.get(key)` - Retrieve values by key
- `DEL` → `client.delete(key)` - Delete keys
- `INCR` → `client.incr(key)` - Increment counters

**Use Cases**:
- General Redis performance testing
- Basic functionality validation
- Mixed read/write workloads
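
As a rough sketch, one `basic_rw` iteration could look like the following with redis-py (the connection settings, function name, and key naming here are illustrative, not the app's exact code):

```python
import random

import redis

# Illustrative connection; the real app builds its client from configuration.
client = redis.Redis(host="localhost", port=6379, decode_responses=True)

def basic_rw_iteration(i: int) -> None:
    """One mixed read/write iteration: SET, GET, INCR, then DEL."""
    key = f"basic:{random.randint(0, 999)}"
    client.set(key, f"value-{i}")      # SET
    client.get(key)                    # GET
    client.incr(f"counter:{i % 10}")   # INCR against a small pool of counters
    client.delete(key)                 # DEL
```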

### **2. ListWorkload (list_operations)**
**Purpose**: List data structure operations
**Operations**:
- `LPUSH` → `client.lpush(key, value)` - Push to left of list
- `RPUSH` → `client.rpush(key, value)` - Push to right of list
- `LPOP` → `client.lpop(key)` - Pop from left of list
- `RPOP` → `client.rpop(key)` - Pop from right of list
- `LRANGE` → `client.lrange(key, start, end)` - Get range of elements

**Use Cases**:
- Queue implementations
- Stack operations
- List processing workloads
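
For illustration, a queue maps onto `RPUSH`/`LPOP` and a stack onto `RPUSH`/`RPOP`; a minimal redis-py sketch (key names are made up):

```python
import redis

client = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Queue semantics: producer pushes on the right, consumer pops from the left (FIFO).
client.rpush("jobs", "job-1", "job-2")
first_job = client.lpop("jobs")        # -> "job-1"

# Stack semantics: push and pop from the same end (LIFO).
client.rpush("stack", "a", "b")
top = client.rpop("stack")             # -> "b"

# Peek at up to ten elements without removing them.
preview = client.lrange("jobs", 0, 9)
```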

### **3. PipelineWorkload (high_throughput)**
**Purpose**: Batch operations using Redis pipelines
**Operations**:
- `SET` → `pipe.set(key, value)` - Batched sets
- `GET` → `pipe.get(key)` - Batched gets
- `INCR` → `pipe.incr(key)` - Batched increments

**Features**:
- Configurable batch size (default: 10)
- Reduced network round trips
- Higher throughput

**Use Cases**:
- High-throughput scenarios (700K+ ops/sec observed)
- Bulk data operations
- Performance optimization testing
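
A minimal sketch of the batching idea, assuming a plain (non-transactional) redis-py pipeline and a hypothetical `batch_size` parameter:

```python
import redis

client = redis.Redis(host="localhost", port=6379)

def run_batch(batch_size: int = 10) -> list:
    """Queue a batch of commands client-side, then send them in one round trip."""
    pipe = client.pipeline(transaction=False)  # plain pipeline, no MULTI/EXEC wrapper
    for i in range(batch_size):
        pipe.set(f"pipe:key:{i}", i)
        pipe.incr("pipe:counter")
        pipe.get(f"pipe:key:{i}")
    return pipe.execute()  # one network round trip for all queued commands
```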

### **4. PubSubWorkload (pubsub_test)**
**Purpose**: Publish/Subscribe messaging patterns
**Operations**:
- `PUBLISH` → `client.publish(channel, message)` - Send messages
- `SUBSCRIBE` → Background subscriber thread - Receive messages

**Features**:
- Multi-channel support
- Background subscriber threads
- Real-time messaging testing

**Use Cases**:
- Real-time messaging systems
- Event-driven architectures
- Notification systems
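
A sketch of the publish side plus a background subscriber thread using redis-py's `pubsub()` helper (channel names are illustrative):

```python
import redis

client = redis.Redis(host="localhost", port=6379, decode_responses=True)

def on_message(message: dict) -> None:
    # Invoked by the subscriber thread for every message received.
    print(message["channel"], message["data"])

# Background subscriber thread, as described above.
pubsub = client.pubsub()
pubsub.subscribe(**{"test:channel:0": on_message})
subscriber = pubsub.run_in_thread(sleep_time=0.01, daemon=True)

# Publisher side: send messages on the same channel.
client.publish("test:channel:0", "hello")

subscriber.stop()  # stop the background thread when the workload ends
```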

### **5. TransactionWorkload (transaction_test)**
**Purpose**: Atomic multi-command transactions (MULTI/EXEC)
**Operations**:
- `MULTI` → Start transaction
- `SET/GET/INCR` → Queued operations
- `EXEC` → Execute transaction

**Use Cases**:
- Atomic transaction testing
- Complex multi-step operations
- Data consistency validation
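
As a sketch, redis-py wraps queued commands in `MULTI`/`EXEC` when a transactional pipeline is executed (key names are illustrative):

```python
import redis

client = redis.Redis(host="localhost", port=6379)

# transaction=True makes execute() wrap the queued commands in MULTI ... EXEC,
# so the server applies them atomically.
pipe = client.pipeline(transaction=True)
pipe.set("txn:balance", 100)
pipe.incr("txn:ops")
pipe.get("txn:balance")
results = pipe.execute()  # sends MULTI, the queued commands, then EXEC
```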

## 🔧 Configuration Examples

### **Basic Read/Write**
```bash
python main.py run --workload-profile basic_rw --target-ops-per-second 1000
```

### **High Throughput Pipeline**
```bash
python main.py run --workload-profile high_throughput --target-ops-per-second 100000
```

### **List Operations**
```bash
python main.py run --workload-profile list_operations --target-ops-per-second 5000
```

### **Pub/Sub Testing**
```bash
python main.py run --workload-profile pubsub_test --target-ops-per-second 1000
```

### **Transaction Testing**
```bash
python main.py run --workload-profile transaction_test --target-ops-per-second 500
```

## 📊 Metrics Per Operation

Each workload now provides **operation-specific metrics**:

### **Prometheus Metrics**
```
redis_operations_total{operation="SET", status="success"} 1234
redis_operations_total{operation="GET", status="success"} 5678
redis_operations_total{operation="LPUSH", status="success"} 910
redis_operations_total{operation="PUBLISH", status="success"} 112
```

### **Latency Histograms**
```
redis_operation_duration_seconds_bucket{operation="SET", le="0.001"} 800
redis_operation_duration_seconds_bucket{operation="GET", le="0.001"} 900
redis_operation_duration_seconds_bucket{operation="LPUSH", le="0.005"} 50
```
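
For reference, metrics with these names and labels could be declared with `prometheus_client` roughly as follows (bucket boundaries and help strings are assumptions, not the app's exact definitions):

```python
from prometheus_client import Counter, Histogram

# Counter behind the redis_operations_total series shown above.
REDIS_OPS = Counter(
    "redis_operations_total",
    "Total Redis operations by command and outcome",
    ["operation", "status"],
)

# Histogram behind the redis_operation_duration_seconds_bucket series.
REDIS_LATENCY = Histogram(
    "redis_operation_duration_seconds",
    "Redis operation latency in seconds",
    ["operation"],
    buckets=(0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0),
)

REDIS_OPS.labels(operation="SET", status="success").inc()
REDIS_LATENCY.labels(operation="SET").observe(0.0008)
```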

## 🎯 Grafana Dashboard Queries

### **Operations Rate by Type**
```promql
sum by (operation, status) (rate(redis_operations_total[5m]))
```

### **Latency Percentiles by Operation**
```promql
histogram_quantile(0.95, sum by (operation, le) (rate(redis_operation_duration_seconds_bucket[5m])))
```

### **Error Rate by Operation**
```promql
sum by (operation) (rate(redis_operations_total{status="error"}[5m]))
/
sum by (operation) (rate(redis_operations_total[5m]))
```

## 🚀 Performance Characteristics

### **BasicWorkload**
- **Throughput**: 10K-50K ops/sec
- **Latency**: P95 < 5ms
- **Use Case**: General testing

### **PipelineWorkload**
- **Throughput**: 100K-1M+ ops/sec (700K+ already observed)
- **Latency**: P95 < 10ms (batched)
- **Use Case**: Maximum throughput

### **ListWorkload**
- **Throughput**: 5K-20K ops/sec
- **Latency**: P95 < 8ms
- **Use Case**: Data structure testing

### **PubSubWorkload**
- **Throughput**: 1K-10K msgs/sec
- **Latency**: P95 < 15ms
- **Use Case**: Messaging patterns

### **TransactionWorkload**
- **Throughput**: 500-5K txns/sec
- **Latency**: P95 < 20ms
- **Use Case**: Atomic transaction validation

## 🔍 Operation-Specific Insights

With the new **direct Redis client method calls** (no more `execute_command`), you get:

### **🎯 True Operation-Level Metrics**
- **Direct method calls**: `client.set()`, `client.get()`, `client.lpush()`, etc.
- **Native Redis client**: Uses the actual redis-py client methods
- **Proper operation names**: Each operation is tracked under its real Redis command name
- **Accurate latencies**: Each Redis call is timed directly

### **📊 Benefits**
1. **Track per-operation performance** - See which Redis commands are fastest/slowest
2. **Identify bottlenecks** - Find operations causing latency spikes
3. **Optimize workloads** - Focus on problematic operation types
4. **Compare Redis versions** - See how different versions handle specific operations
5. **Investigate memory growth** - Correlate Redis memory usage with per-operation traffic patterns
6. **True Redis testing** - Use the same client methods your applications would use

### **🔧 Implementation Details**
- **Helper method**: `_execute_with_metrics()` reduces code duplication (see the sketch below)
- **Consistent timing**: All operations are timed the same way
- **Error handling**: Exceptions are recorded in the metrics and then re-raised
- **Type safety**: Full type hints for all methods
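
A minimal sketch of what such a helper might look like (the app's actual signature may differ); it reuses the `REDIS_OPS` and `REDIS_LATENCY` metrics from the sketch in the metrics section:

```python
import time
from typing import Any, Callable

def _execute_with_metrics(operation: str, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    """Time one Redis call and record per-operation success/error counts and latency."""
    start = time.perf_counter()
    try:
        result = func(*args, **kwargs)  # e.g. client.set, client.lpush
        REDIS_OPS.labels(operation=operation, status="success").inc()
        return result
    except Exception:
        REDIS_OPS.labels(operation=operation, status="error").inc()
        raise  # propagate the exception after recording it
    finally:
        REDIS_LATENCY.labels(operation=operation).observe(time.perf_counter() - start)

# Example usage: _execute_with_metrics("SET", client.set, "key", "value")
```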

This gives you **production-grade, operation-specific observability** into your Redis performance testing! 🚀