Confirmation bias and common social media algorithms

One of the most accurate illustrations of the consequences of what is nowadays known as “confirmation bias” comes from the French physiologist Claude Bernard, in his book An Introduction to the Study of Experimental Medicine, first published in 1865. Bernard writes (translated from French):
Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries; they also make very poor observations. Of necessity, they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory. In this way they distort observation and often neglect very important facts because they do not further their aim. This is what made us say elsewhere that we must never make experiments to confirm our ideas, but simply to control them.
Excerpt from Claude Bernard, An Introduction to the Study of Experimental Medicine, Dover Publications, p. 38.
This is probably one of the most elusive biases a budding researcher can be oblivious to, and it deserves scrutiny whenever a scientific claim, or the significance of an observation or finding, is being discussed.
For the general public, social media platforms contribute to this type of bias by suggesting to their users content that mostly resembles what they have previously viewed, thus limiting their exposure to opposing views and alternative explanations!
This is probably the reason behind the unprecedented increase in the number of delusional people who talk very confidently about their ideas and opinions, and who firmly believe that those ideas are widely endorsed by so-called “experts in the field”!
Developers working on algorithms that promote unverified, user-contributed content should account for this type of bias and avoid amplifying it, or risk losing the masses to conspiracy theories, false news, and all other sorts of input that further erode the sanity of a society.
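To make the point concrete, here is a minimal sketch of how such a mitigation might look, under toy assumptions: content is tagged with topics, “resemblance” is Jaccard similarity against the user's viewing history, and a fraction of feed slots is deliberately reserved for the least-similar items. Every name here (`similarity`, `recommend`, `explore_frac`) is illustrative and does not correspond to any real platform's system.

```python
import random

def similarity(item, history):
    """Jaccard similarity between an item's topic tags and the union of
    topics the user has already viewed (a toy engagement proxy)."""
    seen = set().union(*(i["topics"] for i in history)) if history else set()
    tags = set(item["topics"])
    return len(tags & seen) / len(tags | seen) if tags | seen else 0.0

def recommend(items, history, k=5, explore_frac=0.4, rng=random):
    """Rank candidates by similarity to past views, but reserve a fraction
    of the feed (explore_frac) for items drawn from the *least* similar
    end of the ranking, to counteract the narrowing effect described above."""
    ranked = sorted(items, key=lambda i: similarity(i, history), reverse=True)
    n_explore = int(k * explore_frac)
    familiar = ranked[: k - n_explore]          # the usual "more of the same"
    rest = ranked[k - n_explore:]               # everything outside that bubble
    novel = rng.sample(rest, min(n_explore, len(rest)))
    feed = familiar + novel
    rng.shuffle(feed)                           # avoid signalling which is which
    return feed
```

With `explore_frac=0`, this degenerates into the pure similarity ranking that produces the filter bubble; raising it trades short-term engagement for exposure to unfamiliar viewpoints, which is precisely the design decision the paragraph above asks developers to make deliberately.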