Home About RSS

Robotics and the Law

Experiments with socialbots

First: hi. The idea is that I will post sporadically on topics relating to Robots, Freedom, and Privacy.

At the east coast hacker conference HOPE 9 the weekend of July 13-15, 2012, Pacific Social Architecting Corporation’s Tim Hwang reported on experiments the company has been conducting with socialbots. That is, bot accounts deployed on social networks like Twitter and Facebooks for the purpose of studying how they can be used to influence and alter the behavior and social landscapes of other users. Their November 2011 paper (PDF) gives some of the background.

The highlights:
- Early in 2011, Hwang conducted a competition to study socialbots. Teams scored points by getting their bot-controlled Twitter accounts (and any number of supporting bots) to make connections with and elicit social behavior from an unsuspecting cluster of 500 online users. Teams got +1 point for mutual follows; +3 points for social responses; and -15 if the account was detected and killed by Twitter. The New Zealand team won with bland, encouraging statements; no AI was involved but the bot’s responses were encouraging enough for people to talk to it. A second entrant used Amazon’s Mechanical Turk; another user could ask it a direct question and it would forward it to the MT humans and return the answer. A third effort redirected tweets randomly between unconnected groups of users talking about the same topics.

- A bot can get good, human responses to “Are you a bot” by asking that question of human users and reusing the responses.

- In the interests of making bots more credible (as inhabited by humans) it helped for them to take enough hours off to seem to go to sleep like humans.

- Many bot personalities tend to fall apart in one-to-one communication, so they wouldn’t fare well in traditional AI/Turing test conditions - but online norms help them seem more credible.

- Governments are beginning to get into this. The researchers found bots active promoting both sides of the most recent Mexican election. Newt Gingrich claimed the number of Twitter followers he had showed that he had a grass roots following on the Internet; however, an aide who had quit disclosed that most of his followers were fakes, boosted by blank accounts created by a company hired for the purpose. Experienced users are pretty quick to spot fake accounts; will we need crowd-based systems to protect less sophisticated users (like collaborative spam-reporting systems)? But this is only true of the rather crude bots we have so far. What about more sophisticated ones? Hwang believes the bigger problem will come when governments adopt the much more difficult-to-spot strategy of using bots to “shape the social universe around them” rather than to censor.

Hwang noted the ethical quandary raised by people beginning to flirt with the bot: how long should the bot go on? Should it shut down? What if the human feels rejected? I think the ethical quandary ought to have started much earlier; although the experiment was framed in terms of experimenting with bots in reality the teams were experimenting on real people, even if it was only for two weeks and on Twitter.

Hwang is in the right place when he asks “Does it presage a world in which people design systems to influence networks this way?” It’s a good question, as is the question of how to defend against this kind of thing. But it seems to me typical of the constant reinvention of the computer industry that Hwang had not read - or heard of - Andrew Leonard’s 1997 book Bots: The Origin of New Species, which reports on the prior art in this field, experiments with software bots interacting with people through the late 1990s (I need to reread it myself). So perhaps one of the first Robots, Freedom, and Privacy dangers is the failure to study past experiments in the interests of avoiding the obvious ethical issues that have already been uncovered.

wg

Leave a Reply

You must be logged in to post a comment.