Guaranteeing both theoretical and practical performance is of utmost importance when developing major websites.
An adequate IT architecture ensures the theoretical performance. Among other things, this architecture has to guarantee that the system can scale via parallelization wherever a task can be split into independent subtasks.
The practical performance then has to be secured as well. Practical performance depends not only on the architecture scaling adequately and on all components possessing the required capabilities, but on many other details too.
Experience shows that, at the top level, some steps that should only be executed once are often replicated because they are not carried out as the architecture stipulates.
Furthermore, the processing of particular subtasks (the requests of different users) often isn't sufficiently parallelized.
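The kind of parallelization meant here can be sketched in a few lines. This is a minimal illustration, not the actual system: `handle_request` is a hypothetical placeholder for the real per-user work.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Placeholder for the real per-user work (hypothetical helper).
    return f"response for user {user_id}"

def process_requests(user_ids, workers=8):
    # The requests of different users are independent, so they can be
    # processed in parallel instead of one after another.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_request, user_ids))
```

Because the subtasks are independent, adding workers scales throughput until some shared resource becomes the limit.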
However, sometimes the tricky part lies deeper: some servers run out of resources while managing the processing of a multitude of simultaneous requests, as is the case for many major websites.
The best way to achieve performance optimization is to proceed methodically.
It is important to instrument the system adequately during the actual development process. Acceptably detailed logging is one part of this. Logging should be implemented consistently across all components so that the corresponding log-file analyses can be carried out.
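Consistency here simply means that every component emits entries in the same format, so one analysis tool can read them all. A minimal sketch, assuming Python's standard `logging` module; the format string and component names are illustrative:

```python
import logging

# One shared format across all components so that log files from
# different parts of the system can be analyzed uniformly.
LOG_FORMAT = "%(asctime)s %(name)s %(levelname)s %(message)s"

def get_component_logger(component):
    # Each component gets a named logger with the shared format;
    # the guard avoids attaching duplicate handlers on repeated calls.
    logger = logging.getLogger(component)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(LOG_FORMAT))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```

With every component calling `get_component_logger`, a single parser suffices for all log files.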
Subsequently, the system should be evaluated by means of suitable scenarios, i.e. by defining a number of system runs (click paths) together with the specification of the generated load (the number of simulated users accessing the application simultaneously).
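Such a scenario can be captured as a small data structure. This is a sketch under assumed names; the example scenario and its values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    click_path: list        # ordered URLs a simulated user visits
    simulated_users: int    # concurrent users generating the load

# Hypothetical example: 200 concurrent users logging in and
# opening the dashboard.
login_scenario = Scenario(
    name="login and view dashboard",
    click_path=["/login", "/dashboard"],
    simulated_users=200,
)
```

A load-testing tool can then replay each click path with the configured number of concurrent users.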
Afterwards, a prognosis is made of the expected system performance in a test run. Such a prognosis could consist of the expected response times of certain requests. Often the change of response times with an increasing number of users is of interest. It is also possible to forecast the relative frequency of particular entries in the log files ("for every call of http://x.com/login there should be exactly one call of User::check_password").
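A forecast of this last kind can be checked mechanically against the log files. A minimal sketch, assuming one log entry per line; the sample lines are invented:

```python
from collections import Counter

def count_markers(log_lines, request_marker, call_marker):
    # Count how often the request URL and the backend call appear,
    # so the ratio can be compared to the forecast ("exactly 1:1").
    counts = Counter()
    for line in log_lines:
        if request_marker in line:
            counts["requests"] += 1
        if call_marker in line:
            counts["calls"] += 1
    return counts["requests"], counts["calls"]

# Invented sample log with one call too many.
lines = [
    "GET http://x.com/login",
    "CALL User::check_password",
    "GET http://x.com/login",
    "CALL User::check_password",
    "CALL User::check_password",
]
requests, calls = count_markers(lines, "http://x.com/login",
                                "User::check_password")
```

Here the counts are 2 requests against 3 calls, so the 1:1 forecast is violated and the extra call warrants investigation.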
Following these tasks, the system is measured in an appropriate stress and performance test environment according to the pre-defined scenarios.
The measured values are compared to the forecast values, and all deviations are subjected to a thorough analysis.
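The comparison itself is straightforward. A sketch under assumed data: the request names, response times, and the 10% tolerance are illustrative, not measured values.

```python
def flag_deviations(forecast_ms, measured_ms, tolerance=0.10):
    # Flag every request whose measured response time deviates from
    # the forecast by more than the tolerance (times in milliseconds).
    flagged = {}
    for request, expected in forecast_ms.items():
        actual = measured_ms[request]
        deviation = (actual - expected) / expected
        if abs(deviation) > tolerance:
            flagged[request] = deviation
    return flagged

# Invented example values.
forecast = {"/login": 120, "/dashboard": 250}
measured = {"/login": 130, "/dashboard": 410}
flagged = flag_deviations(forecast, measured)
```

In this invented example only `/dashboard` exceeds the tolerance (64% slower than forecast) and would be handed over for analysis.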
Subsequently, either the forecast or the underlying model has to be adjusted; either because the assumptions were wrong or, more often, because the system has to be optimized. Such adjustments could consist of eliminating unnecessary calls by caching information accordingly, or of identifying the bottlenecks. The most important question is: which system resource (CPU, memory, disk I/O) is limiting the overall performance?
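Eliminating unnecessary calls by caching can often be as simple as memoizing an expensive lookup. A minimal sketch: `load_user_profile` is a hypothetical expensive backend call, and the counter only exists to make the effect visible.

```python
from functools import lru_cache

call_count = 0  # only for demonstration: counts real backend hits

@lru_cache(maxsize=1024)
def load_user_profile(user_id):
    # Hypothetical expensive lookup; caching removes the repeated
    # calls that the log-file analysis revealed as unnecessary.
    global call_count
    call_count += 1
    return {"id": user_id}

for _ in range(5):
    load_user_profile(42)  # only the first call reaches the backend
```

After this change, the log-file analysis should show the call frequency dropping back to the forecast ratio; whether caching is valid depends, of course, on how fresh the data must be.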
Of course, those bottlenecks should be eliminated; the possible solutions can take a variety of forms.
This process can be continued indefinitely, or terminated once a pre-determined performance requirement is met. In the latter case, a final stress and performance test documents that the performance requirement has been fulfilled.
Thanks to broad practical experience, we are proficient in all aspects of such optimization processes. We possess specifications and code that enable appropriate logging to be implemented in the course of development. We are very familiar with the recurring cycle of modeling, forecasting, testing, evaluating, analyzing, and adjusting systems (shaped by our background in the natural sciences).
We have expert knowledge in the field of IT architecture and are able to build models and to analyze the deviations between forecasts and measured performance.
We possess well-founded technological knowledge that allows us to conduct any optimization process holistically.