Google is now manually lowering the ranking of undesirable content, largely based on Wikipedia’s assessment of the author or site.
Wikipedia’s founder and anonymous editors are well known to have an extreme bias against natural health content and authors. Google also contributes heavily to funding Wikipedia, and Wikipedia appears near the top of nearly all searches, despite its contributors being anonymous. Who better to trust than a bunch of unknown, unqualified contributors?
Wikipedia’s co-founder even admits these bad actors have made it a “broken system.” Why would Google give such credibility to a platform that even one of its own founders says is broken and overrun with bad actors?
********
GOOGLE: “Organic is a Lie, Supplements are Dangerous, Chiropractic is Fake,” and Other Thoughts They Want You To Think
SAYER JI
Recently, a shocking discovery was made: Google is autocompleting the search fields of billions of users with false information (on topics ranging from natural health to candidates for election), based not on objective search volume data but on an extremely biased political and socio-economic agenda, one that is jeopardizing the health and human rights of everyone on the planet.
On June 3rd, 2019, it was discovered that Google had scrubbed their search results clean of natural health sites, resulting in some losing as much as 99% of their traffic. Soon after, it was discovered that Google also manipulates users with their autocomplete function into thinking that natural approaches to health are fraudulent and even harmful. This is Part 2 of our ongoing series exposing these practices. Part 1 can be found here.
Google manipulates your search results in a very specific way. For instance, if you start your search out with “supplements are,” Google will autocomplete your search field with the following suggestions:
“SUPPLEMENTS ARE BAD, USELESS, NOT REGULATED, DANGEROUS, SCAMS”
Most Google users believe that its suggestions reflect the volume of searches others are doing on the topic — a reasonable assumption, given Google says their algorithm is “Based on several factors, like how often others have searched for a term.” In fact, Google goes out of their way to say they are not making subjective suggestions, but objective predictions based on real searches:
Predictions, not suggestions
You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering.
How do we determine these predictions? We look at the real searches that happen on Google and show common and trending ones relevant to the characters that are entered and also related to your location and previous searches. [italics and bold added] Source: Google
But Google Trends data show the “supplements are” autocomplete results above to be inaccurate, if not blatantly falsified. In fact, keyword search volume trend lines show that since 2004, searches for the phrase “supplements are bad” have run far below searches for “supplements are good,” and the gap continues to widen, with about five times more people searching for supplements in a positive rather than a negative light. This is the very definition of an Orwellian inversion: Good becomes Bad, and War becomes Peace.
Remarkably, a third Google product, the Keyword Planner from Google’s highly profitable Ads division, quantifies how many searches were actually performed in the United States in the past month for the phrase “supplements are bad.” The result? Only 100–1,000 searches, which works out to roughly 3 to 33 searches a day.
That’s right: out of the entire population of the United States (327,321,076 as of March 26, 2018), at most a few dozen people a day type the phrase “supplements are bad” into the Google search engine. Yet when any of those 327 million people type “supplements are…” into the search box, Google completes the thought for them with the suggestion that supplements are “bad,” steering them toward information on how bad they are.
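The scale of these numbers is easy to check with a back-of-the-envelope calculation. This is only a sketch: the 100–1,000 range is the monthly bucket quoted from Keyword Planner above, and the population figure is the one cited in this article.

```python
# Back-of-the-envelope check of the Keyword Planner figures quoted above.
# Assumptions: 100-1,000 searches per month (Keyword Planner's bucket) and
# an average month of ~30.4 days; population figure as cited in the article.

US_POPULATION = 327_321_076   # as of March 26, 2018 (per the article)
DAYS_PER_MONTH = 365 / 12     # average length of a month in days

for monthly in (100, 1_000):
    per_day = monthly / DAYS_PER_MONTH          # searches per day
    share = monthly / US_POPULATION             # fraction of the population
    print(f"{monthly:>5} searches/month ≈ {per_day:.1f}/day "
          f"({share:.6%} of the population per month)")
```

Either way the bucket is read, the monthly total amounts to well under a hundred-thousandth of one percent of the U.S. population.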
In order to demonstrate that this result is not a fluke, let’s look at the search “taking vitamins…” and see what Google suggests in their autocomplete.
Example #1: “TAKING VITAMINS IS A BAD”
And what does the Google Trends data show? A null result: “Hmm, your search doesn’t have enough data to show here.”
This should not be surprising, considering that the vast majority of people use search engines to ask questions, not to type affirmative statements reflecting foregone conclusions. But that is how thoroughly a very specific political agenda, one hostile to the nutritional supplement industry, is embedded within Google’s algorithm.
When we drop this phrase into Google’s keyword planner, what do we get? An astounding 0-10 people search this term every month in the U.S. In other words, no one.
We discussed the potential corrupting influence of pharmaceutical companies, with whom Google partners and receives investment, on their results in our previous article: INVESTIGATION: Google Manipulates Search Suggestions To Promote Pharma, Discredit Natural Health.
Alternative search engines like DuckDuckGo, on the other hand, won’t make any such suggestion, because they lack an autocomplete function like Google’s, which Google states “is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed.”
Our investigation has uncovered a number of examples like this, where Google places autocomplete suggestions into the search user’s mind that are not only the opposite of what most people search for, but are sometimes phrases that almost no one searches for at all — indicating that Google’s ostensibly objective feature is effectively a propaganda device, programming users to think thoughts they would never otherwise consider.
This has profound implications, as we will explore later, as the so-called Search Engine Manipulation Effect (SEME), identified by researchers in 2013, is one of the most powerfully influential forces on human behavior ever discovered — so powerful, in fact, that it may have determined the outcome of one quarter of the world’s elections in recent years.
But first, let’s look at further examples of Google’s dystopian search results, such as:
Example #2: “GMOS ARE GOOD”
Google Trends data for “gmos are good” vs. “gmos are bad”: “gmos are bad” wins.
Example #3: “ORGANIC IS A LIE”
Google Trends data for “organic is a lie”: null finding.
Example #4: “HOMEOPATHY IS FAKE…”
Google Trends data for “homeopathy is fake”: null finding.
Example #5: “HOLISTIC MEDICINE IS FAKE…”
Google Trends data for “holistic medicine is fake”: null finding.
Example #6: “CHIROPRACTIC IS FAKE…”
Google Trends data for “chiropractic is fake” vs. “chiropractic is real”: “real” wins.
Example #7: “NATUROPATHY IS FAKE…”
Google Trends data for “naturopathy is fake”: null finding.
What’s really going on here?
One might argue that the examples shown above are benign, and may even reflect a twisted sense of humor. After all, wasn’t Google’s original tongue-in-cheek motto “Don’t be evil”? And how seriously do we take a company whose name, after all, is as silly as Google? It turns out, however, that the sort of manipulations revealed here actually have extremely powerful effects on human thinking and behavior — far beyond what most can even imagine.
The true extent to which Google’s search algorithm affects human society was first revealed by research psychologist Robert Epstein and his associate Ronald E. Robertson, who identified the search engine manipulation effect (SEME) in 2013, which they describe as one of the largest human behavioral effects ever measured. Their randomized, controlled experiments suggest that Google’s search tools, including the autocomplete feature that finishes a user’s queries, may be so extraordinarily powerful as to have determined the outcomes of a quarter of the world’s elections in recent years.
Stanford Seminar – The Search Engine Manipulation Effect (SEME) and Its Unparalleled Power by Robert Epstein.
Their 2015 paper, “The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections,” published in the Proceedings of the National Academy of Sciences, is well worth reading. It found that within certain voter subpopulations, such as undecided Republicans, SEME was powerful enough to shift voting preferences by up to 80%.
Weaponized: How The Search Suggestion Effect (SSE) Gave Google Orwellian Power
When someone searches Google — an act so common that “google” was added as a transitive verb to the Oxford English Dictionary and the 11th edition of Merriam-Webster’s Collegiate Dictionary in 2006 — they are often at their most uncertain and vulnerable moment, which is why they have a question and are deferring to Google for an answer. In fact, every second roughly 63,000 Google searches are performed around the world, which translates into about 228 million searches per hour and some 2 trillion searches per year. The majority of these searches will present an autocomplete suggestion, effectively completing the user’s thoughts for them.
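These volume figures follow from simple multiplication. The sketch below assumes the widely cited estimate of roughly 63,000 searches per second quoted above; the hourly figure comes out near 227 million, which the article rounds to 228 million.

```python
# Sanity check of the search-volume figures quoted above,
# assuming the estimate of ~63,000 Google searches per second.

SEARCHES_PER_SECOND = 63_000

per_hour = SEARCHES_PER_SECOND * 3_600           # seconds per hour
per_year = SEARCHES_PER_SECOND * 3_600 * 24 * 365  # seconds per year

print(f"per hour: {per_hour:,}")   # ~227 million
print(f"per year: {per_year:,}")   # ~2.0 trillion
```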
Most searchers assume the results Google presents are objective and credible, because of the perceived power and omniscience of its algorithms. This is why Google’s “autocomplete” feature is so powerful, and why, if it offers not an accurate prediction of what the searcher is looking for but the opposite, it can profoundly influence a person’s thinking and subsequent behavior. It is fundamentally the trust one puts in Google, the belief that it does not have its own agenda, that gives it its immense power and draw.
This is why a recent undercover investigation by Project Veritas is so concerning. James O’Keefe interviewed a top Google executive who admitted that Google adjusts its algorithms to influence elections.
Where do we go from here?
The research on Google’s manipulation of search results has only just begun, and there are other topics to explore. For instance, we examined Google’s attempt to discredit vaccine safety and health freedom advocates by further amplifying the dehumanizing effects of the socially engineered slur “anti-vaxxer” in its autocomplete suggestions.
Yet Google Trends shows that this is not a search the public makes, either globally or in the United States.
Clearly, Google cannot be trusted. Three of its core products — Google Search, Google Trends, and the Google Ads Keyword Planner — together reveal that its autocomplete suggestions do not accurately reflect actual search volume.
How can they get away with this, you might ask? It turns out that they are shielded from lawsuits over how they moderate content on their platform by Section 230 of the Communications Decency Act of 1996.
Until this provision is repealed, they will continue to operate with impunity, essentially above the law. But in a promising new development, on June 19th Senator Josh Hawley (R-Mo.) introduced the Ending Support for Internet Censorship Act, which “removes the immunity big tech companies receive under Section 230 unless they submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral. Sen. Hawley’s legislation does not apply to small and medium-sized tech companies.” We hope it continues to receive bipartisan support and succeeds in closing the loophole that has led to Google’s immense misuse of power.
In the meantime, we encourage our readers to search for other topics relevant to natural health and health freedom and to submit to us their findings so we can continue to update our articles on the topic. Or, hashtag your results:
#SEARCHGATE
************
Original article
Sayer Ji is founder of Greenmedinfo.com, a reviewer at the International Journal of Human Nutrition and Functional Medicine, Co-founder and CEO of Systome Biomed, Vice Chairman of the Board of the National Health Federation, Steering Committee Member of the Global Non-GMO Foundation.
Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of GreenMedInfo or its staff.