Optimizing a PC – Myths and Reality

Numerous websites and advice books equate "optimization" with changing various system settings (ordinary ones, not-quite-ordinary ones, and downright exotic ones) and with disabling various system functions. Curiously, the author has never come across any reliable measurement results demonstrating that the system actually became faster as a result.

As a rule, either figurative comparisons are offered ("the system literally flew"), or it is declared without any evidence that the system will work faster. How much faster depends, it seems to the author, on the ratio of modesty to enthusiasm in the person who compiled that particular set of advice. Modest authors promise gains of tens of percent, the less restrained promise severalfold speedups, and once I even encountered a promise of a 10-50 times speed increase.

To be fair, numbers showing changes after optimization do turn up from time to time, but in most cases they describe not working speed but boot time. Even rarer are figures showing a speed gain in a particular game or benchmark, usually 3DMark.

Of course, there is some connection between boot time and working speed, but it is not as direct as the "optimizers" would like, and sometimes the relationship is even reversed: a faster boot can mean slower program launches.

To complete the picture, there are cases of deliberately false data, such as the widespread story that Windows XP becomes significantly faster, up to 175%, if during installation you choose the hardware abstraction layer (HAL) intended for computers with an 80486C processor.

So how does this kind of optimization actually affect computer speed? In the vast majority of cases, either the improvement is so microscopic that it cannot be noticed even by a well-armed eye, or you get a performance degradation (fortunately, in many cases also microscopic). On the other hand, even the naked eye can clearly see that time is wasted first on studying all these tips, then on applying them, and then, quite often, on hunting down and fixing the various abnormalities that appear in the system afterwards. Such abnormalities, unfortunately, appear more often than we would like, because advisors on "improvement" almost never mention the side effects of changing this or that setting.

By now, most experts understand the pointlessness of such "optimizations": they do not save the main resource, the time of the person sitting at the computer. Any reasonable person also understands that it is silly to hope that Windows contains hidden reserves capable of significantly increasing performance, reserves unknown to its developers yet trumpeted on every corner of the Internet.

Yet the number of enthusiasts who want to "overclock" the system the same way they "overclock" the processor is still quite large, and you can find plenty of reports that the system runs noticeably faster after one change or another. But if the system speed has not actually increased, where do such reports come from?

A little bit about human psychophysiology

What, you may ask, does this have to do with the topic of discussion? The most direct connection!

Since, as has already been said, there is practically no objective data on the usefulness of this or that piece of advice, it is worth understanding how much we can trust our own senses. Sad as it is to admit, we cannot. Human senses are not measuring instruments, and the brain is not a computer. All sensations are subjective: just remember the kettle that takes forever to boil while you stand over it, yet boils quickly if you put it on and go do something else.

If a person expects a certain result, he will subconsciously adjust his sensations in its favor, and there is no way to force yourself to be absolutely objective. It is not without reason that at wine tastings experts are presented not with "Cabernet", "Riesling", or "Moselle", but with numbered glasses.

Let me give an example from life. As you probably know or remember from personal experience, about ten or so years ago there were motherboards for Pentium processors whose second-level cache could cache only the first 64 MB of memory. If more memory was installed in such a board, the Windows kernel and drivers ended up in the uncached region, and the system slowed down by about 10%.

I wrote a program that attempted to force Windows to load itself into the cacheable region. It did not always succeed, but when it did, it restored performance. One piece of feedback I received read as follows: "I installed the program, it wrote 'not installed' in its log, but I KNOW it installed because I can see a SIGNIFICANT acceleration." After some clarifying questions on my part, it turned out that the program was running on a Pentium III computer, where it could not, in principle, improve anything. But the algorithm used in the program and its description in the documentation were convincing enough to produce a feeling of noticeable acceleration.

Blind method

So then: if it seems that the system works faster, does it really work faster, or, as the saying goes, should I cross myself when things only "seem"? No crossing is required. You need the blind method, or better yet, the double-blind method. If you have not heard of it, I will explain: in the blind method, the examinee (evaluator, expert, taster, and so on) does not know what exactly he is evaluating; he is given several numbered options and makes his choice among them.

Applied to the topic under discussion, this means working at the computer over several sessions, with sometimes the modified settings and sometimes the original ones in effect. You give each session a grade, and the grades are then compared against a record of which settings were active in which session. Even this, however, is not considered an entirely clean method, because the person who prepares the test environment knows exactly what he is doing and may, in some indirect way, even against his own wishes, let you know which configuration is in use.
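As an illustration only (not something from the original article), here is a minimal Python sketch of the bookkeeping such a single-blind comparison needs: the configurations are shuffled into anonymous session labels, the tester records a grade for each label, and only afterwards are the labels unblinded and averaged per configuration. The configuration names and grading scale are assumptions made up for the example.

```python
# Minimal single-blind bookkeeping sketch: shuffle configurations into
# anonymous session labels, collect grades per label, unblind at the end.
import random
from collections import defaultdict

def make_blind_schedule(configs, sessions_per_config=3, seed=None):
    """Return a shuffled list of (session_label, config) pairs."""
    rng = random.Random(seed)
    plan = [cfg for cfg in configs for _ in range(sessions_per_config)]
    rng.shuffle(plan)
    # Session labels are just numbers; the key stays hidden until the end.
    return [(f"session {i + 1}", cfg) for i, cfg in enumerate(plan)]

def unblind(schedule, grades):
    """Map each configuration to the average grade it received."""
    per_config = defaultdict(list)
    for label, cfg in schedule:
        per_config[cfg].append(grades[label])
    return {cfg: sum(g) / len(g) for cfg, g in per_config.items()}

if __name__ == "__main__":
    schedule = make_blind_schedule(["original settings", "tweaked settings"], seed=42)
    # In a real test the grades would come from the person at the keyboard,
    # who sees only the session labels, never the configuration names.
    grades = {label: random.randint(1, 10) for label, _ in schedule}
    print(unblind(schedule, grades))
```

The point of the script is merely that the record of which settings were active is kept separately from the grades, and the two are only brought together after all sessions are graded.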

For example, I once happened to read about a supposedly "blind" comparison of cables for audio equipment, in which a man swapped numbered cables and then "listened" to them. It was claimed that he did not know which cable he was plugging in at any given moment, even though the cables under comparison differed severalfold in diameter, so telling them apart was not particularly difficult.

With the double-blind method, the person who changes the configuration does not himself know which configuration he is switching on. Naturally, organizing a double-blind test requires more effort, time, and people, which practically rules out applying the method at home.

There is, however, a relatively simple (though somewhat less rigorous) way to obtain comparison results, and it is used, among other places, at Microsoft. Two identical computers with different software, or with differently configured software, are placed side by side; two people perform the same actions on them, and several other people judge which computer is faster to work on.

Shall we measure the results?

Well, since measurements "by eye" cannot be trusted, we should take a stopwatch and programs that measure speed, shouldn't we? It would seem there is nothing simpler: run the test, record the results, tweak the settings, run the test again, compare the results, and voila, everything is clear and obvious: the speed has either increased, decreased, or stayed the same.

But they say that the devil is in the details, and it’s not for nothing that Einstein’s words come to mind: “Common sense is the set of prejudices a person has accumulated before he turned 18”.

In fact, it is not that simple, because measurements never yield absolutely exact results. In everyday life this inaccuracy is usually neglected, and for household purposes that is justified. Unfortunately, this deeply ingrained habit cannot be mechanically carried over to every kind of measurement.

A computer together with a modern operating system is a complex system (excuse the tautology) in which many processes run simultaneously, and absolute repeatability is impossible to achieve. For example, on a second run the files may be laid out differently on the disk, which, even if only to a small extent, will affect the results.
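To make this concrete, here is a minimal sketch (my illustration, not a tool mentioned in the article) of how one might time the same operation repeatedly and look at the spread of results rather than a single number. The workload is a made-up stand-in; the idea is that if the "after" figure falls within the scatter of the "before" runs, the measurement says nothing about the tweak.

```python
# Time the same operation many times and report the spread, not one number.
import statistics
import time

def measure(task, repeats=10):
    """Run `task` several times and return the list of wall-clock durations."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        durations.append(time.perf_counter() - start)
    return durations

def report(name, durations):
    mean = statistics.mean(durations)
    stdev = statistics.stdev(durations)
    print(f"{name}: mean {mean * 1000:.1f} ms, "
          f"std dev {stdev * 1000:.1f} ms over {len(durations)} runs")

if __name__ == "__main__":
    # Stand-in workload; a real comparison would time disk reads, program
    # startup, a game benchmark, etc., before and after the "optimization".
    workload = lambda: sum(i * i for i in range(200_000))
    report("baseline", measure(workload))
    # If the "after tweak" mean lies inside the baseline's scatter,
    # the difference proves nothing about the tweak itself.
    report("after tweak", measure(workload))
```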

Moreover, even tests such as video adapter benchmarks can give different results out of nowhere, as the saying goes. For an example, see this forum post: despite the fact that no real change was made to the system (recall that Windows XP does not use the registry setting that was altered), the results came out different.
