Results
Overview
The diagrams below exemplify executions of each of the compared approaches for the SockShop application. Rectangular boxes at the top of each diagram represent service combinations. Circles represent VM types, sorted by their cost. The color of each circle describes its execution state:
- Green circle: an executed experiment in which the VM type meets the performance target.
- Red circle: an executed experiment in which the VM type does not meet the performance target.
- Orange circle: a VM type determined not to meet the performance target because of Condition 1.
- White circle: a VM type that was not executed.
- SF: [diagram]
- SF1: [diagram]
- SF2: [diagram]
- SF3: [diagram]
- P: [diagram]
- Kuber: [diagram]
Detailed results for each of the subject applications, which correspond to Figure 4 in the paper, can be found in the attached spreadsheets.
- Hotel Reservation Search Cost
- Media Microsvc Search Cost
- Social Network Search Cost
- Sockshop Search Cost
- Hotel Reservation Execution Time
- Media Microsvc Execution Time
- Social Network Execution Time
- Sockshop Execution Time
Execution Time Breakdown for Kuber
The table below shows the time spent by each approach in each phase: setting up VMs for the experiments, executing the experiments, and running the WID algorithm. While KUBER takes 53 hours on average (the last column), the other four approaches execute for hundreds of hours on average. The total execution time of all the experiments is more than four months.
All times are in hours. SF = SortFind; Cn = Condition n.

App | SF Setup | SF Exper. | SF WID | SF Total | SF+C1 Setup | SF+C1 Exper. | SF+C1 WID | SF+C1 Total | SF+C2 Setup | SF+C2 Exper. | SF+C2 WID | SF+C2 Total | SF+C3 Setup | SF+C3 Exper. | SF+C3 WID | SF+C3 Total | Kuber Setup | Kuber Exper. | Kuber WID | Kuber Total
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Hotel Reservation | >395 | >158 | 0 | >553 | 55 | 22 | 0 | 77 | 150 | 60 | 16 | 226 | 55 | 22 | 2 | 79 | 15 | 6 | 7 | 28 |
Media Service | >372 | >149 | 0 | >521 | 150 | 60 | 0 | 210 | 412 | 165 | 36 | 613 | 90 | 36 | 2 | 128 | 30 | 12 | 15 | 57 |
Social Network | >362 | >145 | 0 | >507 | 245 | 98 | 0 | 343 | 322 | 129 | 26 | 477 | 357 | 143 | 5 | 505 | 58 | 23 | 22 | 103 |
Sock Shop | >265 | >106 | 0 | >371 | 20 | 8 | 0 | 28 | 130 | 52 | 7 | 189 | 60 | 24 | 1 | 85 | 10 | 4 | 12 | 26 |
Average | >348 | >139 | 0 | >487 | 117 | 47 | 0 | 164 | 253 | 101 | 21 | 375 | 140 | 56 | 2.5 | 198 | 28 | 11 | 14 | 53 |
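As a sanity check, each Total column equals Setup + Exper. + WID, and averaging the per-app rows reproduces the reported averages after rounding. A minimal sketch in Python, using the Kuber columns transcribed from the table above:

```python
# Per-app Kuber breakdown in hours: (Setup, Exper., WID, Total),
# transcribed from the table above.
kuber = {
    "Hotel Reservation": (15, 6, 7, 28),
    "Media Service":     (30, 12, 15, 57),
    "Social Network":    (58, 23, 22, 103),
    "Sock Shop":         (10, 4, 12, 26),
}

# Each Total should be the sum of the three phases.
for app, (setup, exper, wid, total) in kuber.items():
    assert setup + exper + wid == total, app

# Average total across the four apps; the table reports it as 53.
avg_total = sum(t for *_, t in kuber.values()) / len(kuber)
print(avg_total)  # 53.5
```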
Utilization of VMs by Application Test
To verify that our VM selection and configuration are appropriate for the selected subject applications and their workloads, we ran each service on each VM type in isolation. The table below shows the average CPU and memory utilization on the smallest and largest VM types, VM1 and VM11, respectively. As none of the VMs is overloaded by a single service, we believe our VM selection is appropriate for the experiments.
Benchmark | CPU Utilization VM1 (%) | CPU Utilization VM11 (%) | Memory Utilization VM1 (%) | Memory Utilization VM11 (%)
---|---|---|---|---
Social Network | 37.7 | 3.1 | 30 | 2 |
Media Service | 26.27 | 2.91 | 29.73 | 1.89 |
Hotel Reservation | 27.13 | 3.13 | 28.5 | 1.8 |
Sock Shop | 33.86 | 4.57 | 30.71 | 1.93 |
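The no-overload claim can also be checked mechanically. A quick sketch, with the utilization values transcribed from the table above, confirms that the peak single-service utilization stays well below saturation:

```python
# Utilization per benchmark, in percent, transcribed from the table above:
# (CPU on VM1, CPU on VM11, Memory on VM1, Memory on VM11).
util = {
    "Social Network":    (37.70, 3.10, 30.00, 2.00),
    "Media Service":     (26.27, 2.91, 29.73, 1.89),
    "Hotel Reservation": (27.13, 3.13, 28.50, 1.80),
    "Sock Shop":         (33.86, 4.57, 30.71, 1.93),
}

# Highest utilization observed across all benchmarks, VM types, and resources.
peak = max(v for row in util.values() for v in row)
assert peak < 50  # no VM comes close to being overloaded by a single service
print(peak)  # 37.7 (CPU on VM1 for Social Network)
```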