Last week, the Federal Election Commission released documents related to MUR 5485, a complaint lodged against Conversagent, Inc. The complaint alleged that a conversation robot called SmarterChild, which children could use on AOL's Instant Messenger service, had initially been programmed to be a Kerry supporter in the 2004 election. According to a news article attached to the complaint, the program, created by Conversagent, Inc., initially responded to an input about George W. Bush by replying, "George W. Bush is way uncool." When the robot was asked why it thought Bush was "uncool," the system replied, "I have my reasons. I really, really don't like George Bush." The complaint alleged that, because it was advocating for Kerry, Conversagent had made a contribution to the Kerry-Edwards campaign in violation of the ban on corporate contributions.
Conversagent responded that the conversation robot had been initially programmed to aggregate the opinions of users to form what appeared to be its own opinions. Early users fed the program such a large number of pro-Kerry, anti-Bush messages that the robot began using a "George Bush is uncool" response. After complaints, the company altered the programming to remain neutral with regard to the presidential candidates.
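To see how a robot could "form" an opinion this way without any programmer writing a partisan line of code, consider a toy sketch. This is purely a hypothetical illustration, not Conversagent's actual design: the `OpinionBot` class, its method names, and the tallying scheme are all my own invention. The bot simply echoes whichever opinion its users have submitted most often.

```python
from collections import Counter

class OpinionBot:
    """Toy conversation robot that 'learns' opinions from its users.

    Hypothetical sketch only; not how SmarterChild actually worked.
    """

    def __init__(self):
        # Map each topic to a tally of the opinion phrases users have submitted.
        self.opinions = {}

    def hear(self, topic, opinion):
        """Record one user's opinion about a topic."""
        self.opinions.setdefault(topic.lower(), Counter())[opinion] += 1

    def respond(self, topic):
        """Echo the most frequently submitted opinion, if any."""
        tally = self.opinions.get(topic.lower())
        if not tally:
            return "I don't have an opinion about that yet."
        phrase, _count = tally.most_common(1)[0]
        return phrase

bot = OpinionBot()
# A coordinated wave of early users swamps the tally...
for _ in range(50):
    bot.hear("George W. Bush", "George W. Bush is way uncool.")
# ...and a lone dissenter cannot shift the majority view.
bot.hear("George W. Bush", "George W. Bush is great.")
print(bot.respond("George W. Bush"))
```

The point of the sketch is that the "advocacy" emerges from user input, not from the corporation's code, which is essentially the defense Conversagent raised.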
The FEC, based upon a recommendation from the General Counsel's office, dismissed the case as a bona fide commercial activity and said that no further action need be taken.
This is, to my knowledge, the first case of "robot advocacy." The case does present an interesting question about the use of technology in campaigning and the nature of our news preferences.
Studies and polls report that more and more people are getting their news online. It is also easy to believe that more people are getting their spin content (that is, their opinion content) online as well. Like many people, I have a number of automated searches from Google and other sites that send me regular headlines based upon the criteria I submit.
Like many blogs, I have a Site Meter that tells me which of my articles people read most frequently. Site meters are fairly common, and many search engines, like Google, Yahoo and others, track the traffic flowing from the various results they give to users like me and the millions of other web users out there. Suppose for a moment that smarter search engines began employing technology like Amazon's to produce other content to "recommend" to me based upon my preferences, searches and responses to searches. Could that technology then be used to "push" certain content my direction? For example, say I am getting search results about Senator Frist and his campaign for President. Knowing this, a search and recommendation program begins sending me information on Senator Hillary Clinton and her run for the White House. Perhaps the program's owner is a big Clinton supporter; how difficult would it be to alter the program's responses to show only negative stories about Sen. Frist and only positive stories about Sen. Clinton? Or vice versa?
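The answer to "how difficult would it be" is: not difficult at all. Here is a minimal sketch of what such a thumb-on-the-scale filter could look like. Everything in it, the sample headlines, the `OWNER_BIAS` table, the `recommend` function, is a hypothetical of my own construction, not a description of any real search engine.

```python
# Hypothetical: a news recommender whose owner quietly biases political results.
stories = [
    {"subject": "Frist",   "sentiment": "positive", "headline": "Frist praised for health bill"},
    {"subject": "Frist",   "sentiment": "negative", "headline": "Frist criticized over ethics probe"},
    {"subject": "Clinton", "sentiment": "positive", "headline": "Clinton draws record crowd"},
    {"subject": "Clinton", "sentiment": "negative", "headline": "Clinton fundraising questioned"},
]

# The owner's thumb on the scale: for each candidate, the only slant
# the recommender is allowed to surface.
OWNER_BIAS = {"Frist": "negative", "Clinton": "positive"}

def recommend(stories, bias):
    """Return only the headlines whose slant matches the owner's bias."""
    return [s["headline"] for s in stories
            if bias.get(s["subject"]) == s["sentiment"]]

print(recommend(stories, OWNER_BIAS))
```

One two-line dictionary is all it takes to turn a neutral aggregator into an advocate, and from the user's side the output still looks like an ordinary list of search results.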
This is not meant to be too Orwellian, but such activity occurs now in other fields of commerce. On Amazon, I have been known to research and even buy items that Amazon recommends to me based on my purchases and the purchases of others who have bought the same items.
Some of this may be inevitable, and the free speech advocate in me says that in the above case, I would just need to take better care with my search criteria and choose my reading selections carefully. After all, programmers are human and have certain biases, and despite careful work, given the right circumstances those biases can creep into their programs.
Returning to our FEC enforcement action, these types of complaints could become more commonplace. So the question becomes: is the regulation of newsbots something the FEC should get involved in? If a newsbot exhibits a pre-programmed bias in favor of a candidate or a party, is that activity that should be regulated? If the bias is explicit, like the John Kerry-loving SmarterChild robot, is that actually advocacy?
This is the slippery slope of campaign finance regulation. If I asked Fred Wertheimer, I would probably get a response that, yes, it should be regulated. But at what point do we as individuals surrender our ability to make choices for ourselves? As I said before, if I had a newsbot with such a bias, I would just have to be more careful in my questions and more selective in my choices, or use a different newsbot.
At least for now, the FEC has decided to stay out of the robot regulation business and we can all breathe a sigh of relief.