Examination of just about any PCB, and especially a PCB of a switch mode converter, reveals many 0.1uF capacitors spread all over the board. When designers are asked how they arrived at the 0.1uF value, the answer is often "this is the customary value". In some cases, the 0.1uF caps are connected in parallel to a capacitor of a larger capacitance, 1uF or even 10uF. At first glance this seems strange: how much can a 0.1uF cap add to a 10uF cap? And, again, when designers are asked what the purpose of the parallel 0.1uF caps is, the answers often range from "it is customary to do so" to "the 0.1uF cap filters the high frequency while the 10uF cap filters the low frequency". It thus appears that in many cases the 0.1uF value is chosen by tradition rather than by an engineering design that is based on the required ripple and power loss of the cap in the given application.
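As a sketch of what such a ripple-based design could look like, the snippet below sizes a buck converter's output capacitor from a ripple specification using the standard approximation dV = dI_L / (8 * f_sw * C), which neglects ESR and ESL. The function name and the numbers are illustrative assumptions, not values from any particular design:

```python
# Hedged sketch: choosing a filter capacitor from a ripple spec instead
# of a "customary" value. Uses the standard buck converter output-ripple
# approximation dV = dI_L / (8 * f_sw * C), ignoring ESR/ESL effects.
# All parameter values below are made-up examples.

def required_capacitance(di_l, f_sw, dv_max):
    """Minimum buck output capacitance (F) for a given ripple budget.

    di_l   -- peak-to-peak inductor ripple current (A)
    f_sw   -- switching frequency (Hz)
    dv_max -- allowed peak-to-peak output voltage ripple (V)
    """
    return di_l / (8.0 * f_sw * dv_max)

# Example: 0.3 A ripple current, 500 kHz switching, 20 mV allowed ripple
c_min = required_capacitance(di_l=0.3, f_sw=500e3, dv_max=20e-3)
print(f"C_min = {c_min * 1e6:.2f} uF")  # prints "C_min = 3.75 uF"
```

Note that the result, 3.75uF, is not a "round" catalog value at all, which is exactly the point: a spec-driven calculation rarely lands on 0.1uF.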

From a statistical point of view, it would seem impossible that for the millions if not billions of circuits that require capacitor filters, the 0.1uF capacitor is exactly the correct value. Why not 0.8uF or 1.2uF? I guess this is because we feel more comfortable with round numbers. I would dare to guess that the number of 0.1uF capacitors sold worldwide is at least an order of magnitude larger than the number of, say, 0.15uF capacitors sold. And there is of course no engineering reason for that. The probability that most circuits require exactly a 0.1uF capacitor is clearly zero.

So, what happens if a 0.1uF cap is chosen "by tradition"? Well, if it is too large for the application, then the circuit will probably work OK but the BOM will become more expensive due to the larger than needed capacitors. If, however, the cap is too small, then this will either be caught in final testing or when the products start coming back due to malfunction. Either way, selecting a component by tradition rather than by design is costly.