Reimplementing the DES Carmen code in python
This code simulates a set of servers running tasks. The idea is to partition tasks between servers and find a close-to-optimal cost/time compromise. To do so, the plan is to use pymoo's NSGA-II. The project is a follow-up of Carmen Coviello's project and its Java implementation.

REQUIREMENTS
------------
You need Python 3.5 or higher, plus flake8 and yapf: flake8 is a syntax/style checker, while yapf is a source-code beautifier. As a development environment, consider PyCharm Community Edition.

DIRECTORIES
-----------
The source code is in the current directory, plus:
- docs: documents, design ideas, etc.
- carmen-files: Carmen test sets and Amazon data
- test-data: simple test cases used for manual validation
- testing-code: baseline use of various structures, to make sure they at least run
- tools: wrappers for flake8 and yapf

FILES AND ROLES
---------------
- config.py: project-specific configuration parameters; almost empty for now
- error.py: error and warning procedures to simplify error handling
- cpu.py: the basic unit modelling a processor; the base class of vCPU and server
- vcpu.py: a server may have multiple vCPUs (in fact it must have at least one); perhaps this is not needed, keeping it for now
- server.py: the server model; it has an id and a reference to the Amazon server configuration file. That is, the server id is the line number where the server was found in the server pool configuration file
- server_pool.py: when we read the server configuration we create a server pool; the pool is an immutable object and is used to instantiate server farms. Each server farm is a configuration used to compute a solution
- server_farm.py: a list of servers, created via the pool's instantiate_farm(nservers, max_per_type) method. It takes two parameters: how many servers, and the maximum number of servers per type. Loosely, for now we build an array of nservers by randomly choosing servers from the pool, creating at most max_per_type instances of each server type.
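The pool/farm idea described above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual API: the class and field names (ServerType, ncpus, cost_per_hour, etc.) and the sample server data are assumptions made up for the example; only the instantiate_farm(nservers, max_per_type) signature comes from the description above.

```python
# Hypothetical sketch of an immutable server pool that instantiates random
# farms, capping the number of instances per server type.
# All names and sample data below are illustrative assumptions.
import random
from collections import Counter


class ServerType:
    def __init__(self, line_id, name, ncpus, cost_per_hour):
        self.line_id = line_id            # line number in the pool config file
        self.name = name
        self.ncpus = ncpus                # a server may have several (v)CPUs
        self.cost_per_hour = cost_per_hour


class ServerPool:
    def __init__(self, server_types):
        self._types = tuple(server_types)  # immutable: the pool never changes

    def instantiate_farm(self, nservers, max_per_type, rng=random):
        # Guard against an unsatisfiable request (would loop forever).
        assert nservers <= len(self._types) * max_per_type, "pool too small"
        farm, used = [], Counter()
        while len(farm) < nservers:
            t = rng.choice(self._types)
            if used[t.line_id] < max_per_type:  # cap instances per type
                used[t.line_id] += 1
                farm.append(t)
        return farm


# Toy pool with two made-up Amazon-like server types.
pool = ServerPool([ServerType(0, "m5.large", 2, 0.096),
                   ServerType(1, "c5.xlarge", 4, 0.17)])
farm = pool.instantiate_farm(nservers=3, max_per_type=2)

# There is no simple server-to-CPU relation: count the CPUs explicitly.
total_cpus = sum(s.ncpus for s in farm)
```

With nservers=3 and max_per_type=2, the farm necessarily mixes both types, which is why the CPU count has to be computed rather than derived from the number of servers.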
Notice that since servers may have multiple CPUs, there is no simple server-to-CPU relation; we need to count them!
- solution.py: our model: a farm plus a test set, together with the evaluate function (the fitness calculation)
- tc.py: a simple model of a test case
- test_set.py: an array of test cases. Notice that when we instantiate a solution we make a deep copy of the test set, so we can permute it: this gives variability without corrupting the original test set

EXECUTION
---------
You can run the Python code directly or use the wrapper ./runExsample.sh; see the parameters it passes, or the example below, which runs it in a detached screen:

screen -dmS DES ./runExsample.sh -t carmen-files/dataset/datasetFastJFreeChart.csv -s carmen-files/systems/serverRealiAmazon.csv
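The bi-objective fitness that solution.py computes (and that NSGA-II would trade off) can be sketched in a few lines. This is a minimal illustration under assumed conventions: the cost model (billing each server for its busy time), the per-server speed factor, and all sample numbers are assumptions for the example, not the project's actual evaluate function.

```python
# Hypothetical sketch of a cost/time fitness for a task-to-server partition.
# assignment[i] = index of the server that runs test case i.
# Speeds, billing model, and sample data are illustrative assumptions.

def evaluate(assignment, durations, speeds, costs_per_hour):
    nservers = len(speeds)
    finish = [0.0] * nservers
    for tc, srv in enumerate(assignment):
        finish[srv] += durations[tc] / speeds[srv]  # faster server, shorter run
    makespan = max(finish)                          # time objective: slowest server
    # Cost objective: each server is billed for the time it is kept busy.
    cost = sum(f / 3600.0 * c for f, c in zip(finish, costs_per_hour))
    return cost, makespan


# Toy example: 4 test cases partitioned across 2 servers.
durations = [120.0, 60.0, 300.0, 30.0]  # seconds
speeds = [1.0, 2.0]                     # relative speed factors
costs = [0.096, 0.17]                   # $/hour, Amazon-style prices
cost, makespan = evaluate([0, 1, 1, 0], durations, speeds, costs)
```

A multi-objective search such as NSGA-II would explore different assignment vectors, keeping the non-dominated (cost, makespan) pairs as the Pareto front.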