0. Preamble

Hi, I'm Six Silberman. I am happy to present this work on sellers' problems in human computation markets to you today, on behalf of my collaborators Joel Ross, Lilly Irani, and Bill Tomlinson, at UC Irvine. I will read an abridged version of the paper. I hope you don't mind that I read, but if you do, don't worry, it will be over in eight minutes.

1. Context and content

There is a lot of excitement about market mechanisms for human computation. Most of this research, like most HC research in general, is on the buyer or requester side. This makes sense sociologically, because most of us in this room are requesters. But there are many more workers than requesters, and they also have interesting and difficult problems to solve. Often they lack the time, resources, institutional connections, or expertise to solve them, and researchers could make valuable contributions in this domain. I will talk about: workers on Mechanical Turk; some problems they have raised that we could all work on; some approaches to these problems; and some open questions.

2. The crowd and its problems

Demographic surveys by Panos and by my UCI colleagues reveal a growing population of young, male, Indian Turkers earning less than 10,000 dollars a year. Almost a third of Indian workers surveyed said they always or sometimes relied on Turking income to make basic ends meet, as did about 13% of American workers surveyed. The most commonly reported motivation for Turking is to earn money, and a variety of surveys confirm the importance of money relative to other motivations, with most respondents reporting that they do not Turk for fun or to kill time. A quarter of Indian Turkers and 13% of American Turkers surveyed reported that Turking is their main source of income.

Requesters often aim to minimize expense at a fixed quality or to maximize quality at a fixed cost, so we might expect workers to try to secure payment for tasks with minimum time expenditure, even if this means "gaming the system" by providing responses they know are of low quality. Fraudulent workers do appear to optimize in this way, but a reading of survey responses and forum discussions reveals a strong concern for what is "fair" and "reasonable" rather than a desire to maximize short-term personal earnings at requester expense. That is, there are honest workers, and they are not rational optimizers in a narrow sense.

In our paper, we review forum discussions as well as responses to surveys about workers' experiences. We identify eight recurring themes of concern that characterize, and often complicate, the lives of these "professional" crowdworkers: uncertainty about payment; unaccountable and seemingly arbitrary rejections; fraudulent tasks; prohibitive time limits; long pay delays; uncommunicative requesters and administrators; the cost of requester and administrator errors being borne by workers; and low pay. Many of these problems are made possible by Amazon's decision to allow requesters to reject work at will, without explanation or cost, and even to automate the process in software.

3. Approaches to seller problems

Workers and requesters on mTurk Forum have built a number of Turking tools, including a list of all requesters, a script for recording your own worker history, and a client-side script to hide HITs posted by particular requesters.
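To give a concrete, if simplified, sense of what a worker-made tool like that HIT-hiding script might look like, here is a minimal client-side sketch. It is not the forum members' actual script: the CSS selector, the data-requester-id attribute, and the requester IDs are assumptions made for illustration, since AMT's real page markup is not described here.

```typescript
// Minimal sketch of a client-side HIT-hiding script (illustration only).
// Assumption: each HIT group in the listing is rendered as a table row
// carrying the requester's ID in a data-requester-id attribute.

// Hypothetical requester IDs this worker has chosen to avoid.
const blockedRequesters = new Set<string>([
  "A1EXAMPLEREQUESTER",
  "A2EXAMPLEREQUESTER",
]);

function hideBlockedHits(): void {
  // Find every HIT row that declares its requester (selector is an assumption).
  const rows = document.querySelectorAll<HTMLElement>("tr[data-requester-id]");
  rows.forEach((row) => {
    const requesterId = row.dataset.requesterId ?? "";
    if (blockedRequesters.has(requesterId)) {
      // Hide rather than delete, so the worker can easily undo the filtering.
      row.style.display = "none";
    }
  });
}

// Run once when the listing page loads; a userscript manager would
// typically re-run this on every visit to the HIT listing.
hideBlockedHits();
```

In practice a worker would run something like this through a userscript or browser add-on so it executes automatically on the HIT listing; the point is simply that a few dozen lines of client-side code can reshape what the market looks like from the seller's side.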
Lilly Irani and I built and maintain Turkopticon, a Firefox add-on and web database application that adds a drop-down interface element to the HIT listing displaying user-contributed reviews of requesters. Requesters are reviewed on four attributes motivated by the problems listed previously. CloudCrowd, launched in September 2009, aims to be a more "worker-friendly" alternative to AMT, claiming a "more efficient [worker] interface," payment through PayPal, and "credibility" ratings (rather than acceptance rates, as in AMT) as the measure of worker quality.

Ned Augenblick, a new economics professor at Berkeley, is building the next version of Turkopticon, which will scrape users' worker dashboards to compute effective wage data for HITs and requesters. These data will be stored in a web-accessible database and added to the HIT listing, allowing Turkers to sort HITs and requesters by effective historical wage. In addition to helping workers, the data collected in this process should give us new insights into the structure and dynamics of Mechanical Turk as a market.

Alek Felstiner, a law student at Berkeley, is building the case for legal regulation of crowdwork. This is not my area of expertise at all, but if you want to read his take, it is on the Dolores Labs blog.

4. Open questions

These projects are only a first step toward addressing the problems raised by workers and toward developing a rich understanding of human computation markets that bridges the perspectives of buyers, sellers, and administrators. Many questions remain. For example:

How does database, interface, and interaction design influence individual outcomes and market equilibria? This has been explored in online auctions but not in human computation. It appears that the design of the rejection feature in AMT has enabled much of the behavior that frustrates workers, but we have no comparative analysis with which to argue this definitively.

What are the economics of fraudulent tasks? What decision logics are used by workers and requesters? Requesters who read HCOMP papers and workers who "game the system" may maximize financial return, while others may satisfice. What problems are being solved, with what strategies, by which actors, and how do these shape market outcomes?

What is fair in paid crowdsourcing? The economists Akerlof and Shiller argue that "considerations of fairness are a major motivator in many economic decisions," one that has been overlooked in neoclassical explanations that assume economic decision makers act rationally. They lament that while "there is a considerable literature on what is fair or unfair, there is also a tradition that such explanations should take second place in the explanation of economic events." At various public events we have heard requesters and administrators say that tasks should be priced "fairly," but fairness is difficult to define and thus difficult to operationalize in practice. Concepts such as the reservation wage, explored in John Horton and Lydia Chilton's paper on the labor economics of paid crowdsourcing, are useful here, but they do not settle the matter, which is complicated economically and culturally by the global reach of HC platforms. This question of fairness links to the question about the relationship between interface design and market outcomes.
If considerations of fairness are key to explaining economic decision making, but fairness is constructed and interpreted through social interaction, then to understand economic outcomes in human computation systems we need an understanding of these systems as social environments. I would venture to suggest that we should not expect a system with sparse social cues to motivate fair interactions. There is work in this vein in human-computer interaction and computer-supported cooperative work (CSCW), but not on human computation systems in particular.

Gaps also remain in our demographic understanding of AMT. For example, both of the ongoing demographic studies use surveys whose income questions have "less than 10,000 dollars a year" as the lowest bin. That bin includes most people in India, so these surveys could be refined. As new platforms and tools come online, comparative studies in all of these areas will become possible, and longitudinal studies will become more feasible. I look forward to reading these studies and to seeing the software tools that are developed around them.

5. Concluding remark

Human computation is bringing Taylorism to information work. If it continues to develop and is taken up broadly, it seems likely that we, as information and knowledge workers, will all eventually become workers in HC systems, if we are not already. This should provide a good selfish reason to pay close attention to workers' experiences and to standard design practices in these systems, with as diverse a methodological and conceptual toolkit as possible, over the long term. Ultimately the question we are asking here is very simple: are we, as designers and administrators, creating contexts in which people will treat each other as human beings in a social relation? Or are we creating contexts in which they will be seduced by the economically convenient fiction alluded to by the phrase "artificial artificial intelligence," that is, that these people are machines and should be treated as such? Thank you.