JOpera is an extensible, Java-based process engine and a central component of our tools for automating experiments in the Cloud.
We need to extend the engine with a distributed and scalable persistence layer. This is one of the main requirements we must fulfill to create an elastic version of the JOpera engine that can automatically scale in cloud infrastructures.
We test elastic computing systems by imposing time-varying workloads on them and subsequently checking conditions on the metrics that we extract from system executions.
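A minimal sketch of the checking step, under the assumption that a condition is a predicate evaluated over every sample of a metric time series (the class and method names are hypothetical, not part of our tooling):

```java
import java.util.List;
import java.util.function.DoublePredicate;

/** Hypothetical sketch: check a condition on a metric time series
 *  extracted from one execution of the system under test. */
public class MetricCheck {
    /** Returns true iff every sample satisfies the condition. */
    public static boolean holds(List<Double> samples, DoublePredicate condition) {
        return samples.stream().allMatch(condition::test);
    }

    public static void main(String[] args) {
        // e.g., response times (ms) sampled while a time-varying
        // workload is imposed on the system
        List<Double> responseTimes = List.of(120.0, 180.0, 240.0, 150.0);
        // condition: response time stays below a 300 ms threshold
        System.out.println(holds(responseTimes, t -> t < 300.0)); // prints true
    }
}
```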
At the moment, the whole process is implemented as a black box, and it is sometimes difficult to understand what is going on inside the system; such understanding is a prerequisite for improving its functioning when a test fails.
Dynamically adaptive systems (DAS) may change their inner business logic, software architecture, or quality of service as a reaction to changes in the environment or user requirements. DAS can also self-manage, that is, monitor their own behavior and plan suitable adaptations to achieve high-level goals specified by their designers.
In many cases, the variables monitored by such systems are aggregated according to some specific operator; for example, they are averaged over a sliding window.
However, if systems adapt while monitoring these variables, the adaptation itself may introduce distortions that eventually result in degraded system performance, or even in critical failures.
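As an illustration of such an aggregation operator, here is a sketch of a sliding-window average over the last N samples of a monitored variable (the class name and window size are our own choices, not part of any specific system):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of sliding-window aggregation (operator: arithmetic mean)
 *  over the last {@code windowSize} samples of a monitored variable. */
public class SlidingAverage {
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    public SlidingAverage(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Add a new sample and return the average over the current window. */
    public double add(double sample) {
        window.addLast(sample);
        sum += sample;
        if (window.size() > windowSize) {
            sum -= window.removeFirst(); // evict the oldest sample
        }
        return sum / window.size();
    }
}
```

With a window of size 3, feeding the samples 1, 2, 3, 4 yields the averages 1, 1.5, 2, 3: each new sample shifts the window, so old fluctuations gradually stop influencing the aggregate.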
Elastic applications are internally implemented by combining an adaptive application and an elasticity controller. The controller is usually built by making assumptions about the environment, and as long as these assumptions hold, the behavior of the controller is predictable. In reality, situations may arise that invalidate these assumptions; for example, the incoming workload may oscillate more frequently and with different intensity than expected.
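To make the role of such assumptions concrete, the following is a hypothetical sketch of a simple threshold-based elasticity controller (the thresholds and method names are assumptions for illustration, not the controller of any particular system). Its implicit assumption is that utilization changes slowly enough that reacting to the latest sample suffices; a workload oscillating faster than the provisioning delay would make a controller like this thrash between scaling out and scaling in.

```java
/** Hypothetical threshold-based elasticity controller sketch. */
public class ThresholdController {
    static final double SCALE_OUT_ABOVE = 0.8; // assumed upper utilization bound
    static final double SCALE_IN_BELOW  = 0.3; // assumed lower utilization bound

    /** Returns the change in the number of instances (+1, 0, or -1). */
    public static int decide(double utilization, int instances) {
        if (utilization > SCALE_OUT_ABOVE) {
            return +1; // overloaded: scale out
        }
        if (utilization < SCALE_IN_BELOW && instances > 1) {
            return -1; // underloaded: scale in, but keep at least one instance
        }
        return 0; // within the assumed operating region: do nothing
    }
}
```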
Elastic computing systems can be described via models that capture their elastic nature.
We seek a way to derive elasticity models using a data-driven and iterative approach: we run experiments to collect data about the behavior of elastic computing systems under different execution conditions, we elaborate on these data to build elasticity models of the systems, we evaluate the models to find flaws or potential improvements, and we define new experiments to be run. We repeat this process to refine our initial models incrementally.