To what extent are your health choices influenced by fake social media accounts? Probably more than you think.
E-cigarettes have generated numerous debates and seemingly endless controversies, despite being widely used by the public for only a relatively short time. Comparisons with the irrefutable harm caused by tobacco smoking have quickly become saturated by simple "well, they must at least be better than cigarettes" statements, leaving public health organizations scrambling to keep the focus on their central message: never starting is better than having to quit, whatever your method of nicotine consumption.
Organizations that previously relied on newspaper ads, billboards and leaflets in doctors' surgeries to communicate with the public now have to compete with a deluge of online content of varying quality and scientific accuracy, some of which may not simply be individuals voicing their opinions and thoughts.
A new study by researchers at San Diego State University (SDSU) was originally designed to use Twitter data to examine e-cigarette use in the U.S., the kinds of people using them and their perceptions of e-cigarettes. While analyzing the data, however, the researchers came across something surprising: odd tweets containing confusing and illogical content about e-cigarettes and vaping, which they eventually concluded had been posted by bots. After re-classifying the tweets, they determined that 70% of the tweets in their dataset were produced by bots.
“Robots are the biggest challenges and problems in social media analytics. Some robots can be easily removed based on their content and behaviors, but some robots look exactly like human beings and can be more difficult to detect,” said Ming-Hsiang Tsou, founding director of SDSU’s Center for Human Dynamics in the Mobile Age, and co-author on the study.
This comes as Twitter, responding to mounting pressure over the integrity of the platform, pledged to crack down on bot-controlled and fake accounts, suspending 70 million of them and introducing new measures to identify spam and abuse. How, or whether, it plans to tackle the spread of scientifically dubious or nonsensical information on public health issues such as vaccination or genetically modified foods is less clear.
The research publish