Dynamically adaptive systems (DAS) may change their inner business logic, software architecture, or quality of service as a reaction to changes in the environment or user requirements. DAS can also self-manage, that is, monitor their own behavior and plan suitable adaptations to achieve high-level goals specified by their designers.
In many cases, the variables monitored by such systems are aggregated according to some specific operator; for example, they are averaged over a sliding window.
However, if a system adapts while such a variable is being monitored, the adaptation may introduce distortions into the collected data that eventually degrade the performance of the system, or even cause critical failures.
For example, if metrics are aggregated over time and across replicated components, it is not clear what happens when additional components are introduced into the running system (or removed from it). In this case, the data collected over the window refer to different system configurations and may not accurately reflect the real status of the system, for example its actual CPU usage.
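A minimal sketch of this effect, with purely illustrative names and numbers (the window size, replica counts, and CPU figures are assumptions, not taken from any real system): total CPU usage is averaged over a sliding window, and a replica is added mid-window, so samples taken under the old configuration contaminate any per-replica estimate derived from the windowed average.

```python
# Hypothetical illustration: sliding-window average of total CPU usage
# across replicas. When a replica is added, older samples in the window
# were taken under a different configuration, so a per-replica load
# estimate based on the current replica count is distorted.

WINDOW = 5

def sliding_avg(samples, window=WINDOW):
    """Average of the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

# Two replicas at ~80% CPU each -> total 160 per sample.
samples = [160.0] * 5
replica_count_history = [2] * 5

# A third replica is added; load rebalances, total stays ~160.
samples += [160.0] * 3
replica_count_history += [3] * 3

avg_total = sliding_avg(samples)   # stable at 160 across the change

# Naive per-replica estimate: divides the whole window by the *current*
# replica count, as if all samples came from the new configuration.
naive_per_replica = avg_total / 3

# Configuration-aware estimate: normalizes each sample by the replica
# count that was actually in effect when it was collected.
per_replica_samples = [s / n for s, n in zip(samples, replica_count_history)]
correct_per_replica = sliding_avg(per_replica_samples)

print(round(naive_per_replica, 1))    # 53.3 (underestimates historical load)
print(round(correct_per_replica, 1))  # 64.0 (mixed-window, configuration-aware)
```

The gap between the two estimates (here roughly 53% versus 64% per replica) is exactly the kind of distortion described above: it persists until the window contains only post-adaptation samples.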
We want to investigate the amount of distortion in monitored metrics and quantify its effect on the quality of the software.