Imagine for a moment that some mad scientist invented a machine, one that, we'll suppose, came out in a number of versions. The design of the machine is quite clever: it takes human beings in, performs its functions, and when they come out the other end, the people are changed, almost like new people.
By and large, the machines produce changes in people that are positive. After going through a machine, people are imbued with a sense of meaning in life; they feel that they have answers to important questions; they have a more secure sense of self-esteem; they feel a greater degree of control over their personal circumstances; they enjoy greater social connection and cohesion with other people who came from the same machine, from close versions of their own machine, and even, to some extent, from other machines more heavily modified; and, in most respects, they come out of these machines, by many objective measures, as kinder, nicer, more satisfied individuals.
Indeed, research shows that people who are put through a machine self-report higher life satisfaction and happiness, and they also tend to live longer, more fulfilled lives. The changes are often described as bringing great comfort and hope to those who undergo them. For these reasons, the machines are considered something of a miracle, and many, many people sign up to go through them (though most are put through by their parents early on, to ensure what the parents feel is maximum benefit).
But, on the other hand, the machines have a certain error rate in producing their "improved" people. Indeed, the error rate is proportional to the degree to which the beneficial changes occur and the firmness with which they hold. Suppose, for instance, that one very popular version of the machine, perhaps Version 3.0, along with its related variants 3.x, has an error rate of roughly 2-5%. A newer model, Version 4.0, more potent particularly in creating community cohesion, has an error rate perhaps as high as 5-10%, though for a variety of reasons, not least the degree of community cohesion Version 4.0 produces, this number proves nearly impossible to determine accurately. Of course, there are a couple of related Versions 4.x as well, similar in most regards but not identical in every detail.
Of course, I should clarify the fruits of these errors. Suppose that errors from these miraculous machines produce people who come out horribly disturbed: literally willing to perpetrate violence, including murder, for patently bad reasons as a result of their defective ride through the machine. They are hostile, demanding, and self-assured with what can only be called a sense of divine right, and this sense provides them with an enormous double standard by which they judge themselves justified in committing whatever horrors they choose. They stand directly in the path of creating a stable global society, and indeed, they fight among one another, even close cousins, with surprising abandon, sometimes even resorting to self-destruction to destroy their enemies.
Imagine such machines were really invented, that they really exist. The machines reliably produce very positive personal and social changes in 90% or more of people, and they reliably produce dangerous, violent, reactionary, oppositional extremists at a rate of some 5-10%.
How would we respond to such machines in reality?
Would we ban them? Destroy them and the plans to build them? Would we even allow anyone to use them?
Would we recommend strongly against their use? Prevent children from being exposed to them? Plainly display and advertise the potential dangers?
Would we hold the inventor liable for the errors? Would we hold liable those who put their children, families, or friends through the machines?
Or would we place them on a pedestal, protect them from any critical review, kowtow to them on personal, public, national, and international levels, acknowledge them as the sole path to virtue, and encourage their continued widespread use, justifying the errors by saying, "it's not the machine; it's the individual that doesn't represent the machine"?
These machines are real. They are ideologies, not least the religions of the world, which reliably produce, and will always produce, extremists. And when these ideologies are religions, we're in the last case, the case of protection.
If we wouldn't stand for such machines, why should we work so hard to protect these ideologies?