# 1. Opening [1200 words]

Thank you all for being here. Some of you came a long way.

## Scope

I'll focus on MT today. But I want to participate in conversations about computationally mediated labor markets broadly -- MT, 99designs, Uber, TaskRabbit, and Elance are all in scope here. I'll call them "online labor markets" just so I don't trip over my tongue.

## You might care about this if...

You might care about what I'm going to say today if you care about user experience and you think workers are "real" users. You might care if you care about the broader social impacts of the systems we make in HCI and the computing industry. You might care if you care about the distribution of risks, costs, and benefits produced by new technology and technologically enabled practices. And you might care if you care broadly about the relationship between human values and technology, or about what kind of world we are building with technologies.

## The most important things

Just in case you have to leave early, here are the seven things I really want to say.

The first is empirical. Some workers in online labor markets are casual or transient. Others are professional, and rely on the income they earn to pay rent, buy food, and make ends meet.

The second is part empirical, part theoretical. Workers are very rarely the narrowly selfish, short-sighted so-called rational actors of classical economic theory. In HCI we know this. But the idea of the rational actor has influenced computing system design and research through game theory and artificial intelligence. The ideas that shape market design also shape outcomes, and when people act more selfishly, the outcomes tend to be worse. People will accept those outcomes if they don't have better alternatives, but that doesn't make them good. And we have better models of people. "Bounded rationality" is better than the classical rational actor, but it doesn't quite cover it.
In the dissertation I talk about "situated rationality", which is like bounded rationality but acknowledges that people are shaped by their social contexts. I don't think we need to resist all formal modelling of people. It is reductive and always incomplete, but I will quote Bonnie and her coauthor Victor Kaptelinin on this topic. They write: "If we refuse to answer the big questions [including the broadest question of what it means to be human], they will be answered by others whose answers we might not like very much." I think this is happening now in the shaping of online labor markets.

The third point is also part empirical and part theoretical. Like other markets, online labor markets are not isolated, homogeneous, "frictionless" exchanges with perfect competition or infinite choice. Some barriers to entry are uniquely low in some online labor markets, but this does not make them "frictionless". For example, for workers there are significant "switching costs" between markets: switching markets means building a reputation from scratch. Neither is the "market of markets" frictionless. There are barriers to starting a market, and maintaining and running a market involves more or less constant attention from highly trained people with expensive skills. Generally, markets can be seen as interlinked parts of large, complex polycentric economic systems. Action situations in markets are linked to one another, even across platforms, through their consequences. Interdependence is the rule, even if it is often invisible.

The next three points are implications for design and operation.

The fourth point is that if market designers want to produce sustainable arrangements for producing quality work, they should take concrete steps to more substantively, systematically, and "ongoingly" address workers' concerns. For now the typical response to worker concerns seems to be "take it or leave it."
A more collaborative approach is possible and would probably yield better outcomes for workers and customers.

The fifth point is that professional workers -- the workers who rely on the income they earn through the market to meet basic needs -- are overlooked allies in improving market outcomes for workers and customers. These workers are invested in producing sustainable arrangements for producing quality work. They should be considered first-class stakeholders in market design, just like customers.

The sixth point is that we may need different organizational models to do this. The two most common models -- the venture-funded startup and the public corporation -- may not fit. Old models like the worker-owned cooperative and new models like the B Corporation may work better.

The seventh and last point is that this is research! HCI and CSCW researchers could develop and drive a practice-oriented agenda that integrates software practice, empirical research, theory development, and questions about values -- like "Where are we going in computationally mediated work? Who gains and who loses? Is this desirable? What should be done?" This will require collaboration across disciplines, sectors, and stakeholder groups -- but we know how to do that.

## The rest of the talk

The rest of the talk will go like this. First I'll talk about Mechanical Turk, Amazon's crowd work market. Then I'll talk about Turkopticon, which is a system Lilly Irani and I built in 2008 and still maintain. It's kind of like Yelp for MT employers, and a lot of workers use it to review employers. Then I'll talk about some theory. And then I'll talk briefly about what I think this means. I'll save discussion of what I'm working on next for the Q&A.

# 2. MT [1800 words.]

So, Mechanical Turk.

## The basic process

The basic process goes like this. [Slide: requesters post tasks, etc.] First, employers, called requesters for legal reasons, design their tasks, including the price, and post them to the site.
Amazon charges 10 to 30 percent on top. Then workers do the tasks. Then the requester looks at the work submitted by each worker and decides whether to approve or reject it. Workers are paid for approved work and not paid for rejected work. Legally, workers are independent contractors, not employees, so they are not entitled to minimum wage, overtime pay, health insurance, or any other benefits of traditional employment.

## Tasks

What kinds of tasks get posted to AMT? Short answer: [Slide: list of categories] Some of the big categories are: search result relevance evaluation; transcription and translation; writing; content moderation; data cleaning and metadata creation; usability testing; and behavioral and market research.

## Requesters

Who are the requesters? Some are big companies. [Slide: requesters list] Google, Twitter, and Amazon itself all use, or have used, AMT directly for various things. Others are startups. LinkedIn and the US FDA have contracted smaller companies to run tasks for them on AMT. InfoScout is a small market research company. They give customers of big stores like Target "points" for taking pictures of their purchase receipts. Then they pay Turkers to transcribe the receipts. Then they sell the information from the receipts back to the companies that run the stores. Some requesters are intermediaries for customers. Two of the most prolific requesters, CrowdFlower and CrowdSource, are in this category. (CrowdFlower doesn't post to AMT any more, but that's another story.) Some requesters are academics. Some of these are social scientists running surveys or behavioral experiments. Others are computer scientists building "crowd powered systems". For example, University of Pennsylvania computational linguist Chris Callison-Burch used AMT to translate a lot of Wikipedia entries into different languages.

## Workers

Who is doing this work? Here's one answer. [Show "Faces of Mechanical Turk", tinyurl.com/facesofamt.]
Of course this is not representative, but it's a start at getting a sense for the diversity of Turkers and the relative lack of diversity of their motivations. Most of them say "I Turk for cash." One question this raises is: aren't there better ways to earn money? This video starts to answer that question. [Show "Turking for Respect", tinyurl.com/turkingforrespect.] I like this video, but it is misleading in at least one important way, which I'll get to in a minute.

We do have some quantitative data about workers. [Slide: demographic data] Panos Ipeirotis at NYU has a demographic survey running. Respondents to his survey were 75-80% US-based, with the rest mostly from India. (You can get paid in dollars, rupees, or Amazon gift card points.) Overall, workers were about half women and half men. But the US worker pool has more women and the Indian worker pool has more men. About half the workers were born in the 1980s. The median household income for US workers was $50K/yr; for Indian workers, $10K/yr. Panos has data on household size and some other things too.

But I want to know about people's relationships to Turking. We have old data on this, from Joel Ross' 2009 survey, where 20% of respondents said they "sometimes or always" needed Turking income to make ends meet. The number was higher for Indian respondents.

[Slide: most work is done by a small part of the worker population] But since then we've realized that most of the work in the market is done by a small fraction of the workers. This is a common pattern on the internet. But these workers relate differently to the market than the others. They are almost certainly the same 20% or so of the worker population that relies on Turking income to make ends meet. These are the serious Turkers. They Turk many more hours per week than casual workers. They participate in online communities, where they teach each other about Turking and share information about good and bad requesters and tasks.
They don't cheat requesters. In fact, they argue about norms and discipline each other for doing things they think are unethical, like sharing sensitive survey information. They build software for each other. Serious Turkers use a lot of specialized software; with the exception of Turkopticon I think it is all built by Turkers. They give each other money in emergencies. They give each other emotional support. For example, here is a forum thread about dealing with anxiety, depression, and PTSD in Turking life. [Show sad pandas thread, tinyurl.com/mtgsadpandas.] They also help requesters improve their task designs and review process *for free*. Of course, there are cliques and spats and grudges and everything else that all communities have too.

One way to look at this is: why don't these serious Turkers spend this time on paying tasks? But another way is: they are professionals in a professional community. [Slide: serious Turkers contribute a lot of unpaid labor to create an effective and supportive professional community] In any profession a lot of unpaid labor goes on to make the paying stuff go well. Turking is no different. And, importantly, the Turkers understand this. They understand that they are part of a professional community. Others taught them how to Turk. Others gave their time to make, maintain, and improve the software they use every day. Others send them messages at odd hours when good tasks go up. Others support them practically, emotionally, and even financially in bad times.

In a 2007 talk about the continued relevance of the humanities in a world increasingly shaped by the natural sciences and economics, the postcolonial theorist Gayatri Spivak said: "For me, the 'double bind' is a general description of all doing, all thinking as doing, all self-conscious living. Contradictory instructions come to us at all times. We learn to listen to them and remain in the game. This is the double bind.
"The swing between imagination and self-interest is the biggest double bind in our lives, individually and collectively. In English, its name is ethics. Theories of ethics that tell us we must take the facts into consideration and make a rational choice swing more toward self-interest. Those that say we are defined and determined by others swing more toward the imagination. As the humanities become less and less relevant in our educational systems, that second swing can be understood less and less." This brings us to one of the big questions Bonnie and Victor were, I think, talking about in the quote I read earlier, which is, quite simply, "What are we?" What does it mean to be human in this technological world we have made? I won't say that I have seen the Turkers and I know now, empirically, by the power of social science, that we are all, as Spivak puts it, defined and determined by others. It is not a hypothesis to be confirmed or refuted. The question is, how do you want to see it? What model of the world do you want to guide your action? As the sociologists know well, theories have consequences. It makes a difference what we believe. Where do we draw the boundary around ourselves? [Pause.] So, how much do the professional Turkers make? I've seen them talk about earning $40/day regularly. I've seen experts -- Turkers who have done literally millions of HITs -- talk about a daily goal of $100. I've seen them talk about $400 days. There are not many people in this category, but it's important not to forget about them, especially while we are busy being incredulous that CrowdFlower co-founder Lukas Biewald actually said in public while being recorded that he paid workers, on average, $2-3/hr. Both of these stories are important. [Slide: $2/hr - $400/day] I hear that the super Turkers don't really do surveys. But I think demographic studies are still important. 
I think design and policy conversations could be well informed by data about the distribution of wages, experience (in terms of years and number of tasks completed), participation in worker communities and relation to them, use of specialized software, and reliance on Turking income. [Slide: wages, experience, community participation, ...] There are probably also things Turkers would want to know about themselves collectively. A community-engaged approach to research might be able to produce better data for workers and researchers than survey tasks posted to AMT.

## Complications [500 words]

There are complications to the basic Turking process. [Slide: rejections; scale, communication; complexity, expectations; distrust]

[Rejection, 190 words.] Some complications come from requesters' ability to reject work. Requesters can reject work for any or no reason, and workers have no recourse within AMT against rejections they think were mistaken, unfair, or malicious. Some requesters do reject work with the idea of getting it for free. And some workers do cheat. As a result, complex quality control and reputation schemes have evolved around the rejection feature on both sides of the market. These schemes don't always work as expected, producing unexpected rejections and angry workers.

[Scale, 170 words.] These tensions are rarely resolved, because the scale of work on AMT makes meaningful communication between workers and requesters hard. Requesters often come to AMT with the expectation that they will post their task, go away, and come back and get the results without any interaction with workers. One requester Lilly interviewed said this: "You cannot spend time exchanging email [with workers]. The time you spent looking at the email costs more than what you paid them. This has to function on autopilot as an algorithmic system...and integrated with your business processes."
One requester responded to a bad review on TO with this comment: "It is not cost effective to respond to each and every inquiry or complaint; we're trying to be dirt cheap here, and I'm frankly paid too well to spend time on email." But workers want requesters to be responsive when problems arise, because this improves work quality and their odds of getting paid.

[Complexity, learning curve, 130 words.] New requesters often don't know about the problems that can come up. So they don't plan for them and don't make time to communicate. When they do respond, they are sometimes hurried and terse, and sometimes rude. This leads to more problems. Taken together, these complications produce a climate of uncertainty, distrust, anxiety, and even hostility between workers and requesters, and sometimes between workers.

# 3. TO [800 words.]

So let's talk about reviews, which means talking about TO.

[Origin story and initial design, 230 words.] Turkopticon started as a project in a tactical media class taught by Beatriz da Costa in 2008. Lilly Irani had heard about MT and posted some tasks. In the first she asked workers to write poems. In the second she asked them to write a Turkers' Bill of Rights. We coded the 67 answers to the Bill of Rights task. Some of them were very long and detailed. We found eight common concerns: [Slide: eight concerns.] Uncertainty about payment; unaccountable and seemingly arbitrary rejections; fraudulent tasks; prohibitive time limits; long pay delays; uncommunicative requesters and administrators; the cost of requester and administrator errors borne by workers; and low pay. Many of these were made possible by the rejection feature. MT had a crude worker reputation system in the form of approval rates. We thought it should have a reputation system for requesters, so we made a proof of concept. We hoped Amazon would build one into MT. Instead, people started using TO. Now the MT team refers workers to us instead of building their own.
[Turking with TO, 190 words.] Turkopticon adds complexity to the Turking process. The simplest way for Turkers to use Turkopticon is to see aggregate review data while looking at the AMT task list. If they want more information, they can click through to the reviews. This takes time but is often worth it. "I should have read the reviews here before working for this requester" is a common statement in bad Turkopticon reviews. Then workers can also leave their own reviews.

[Good outcomes, 100 words.] Turkopticon appears to have changed the decision making process in approving or rejecting work, at least among requesters who know about it. One academic requester told me that having a bad Turkopticon reputation made it effectively impossible to attract workers who would submit quality work. A team of economists at the University of Minnesota ran two experiments on Turkopticon last year. There were ethical issues with one of the experiments, but the findings were interesting. They found that requesters with bad reputations were five times more likely to reject work, and that their tasks took longer to get done.

[Outcomes and complications: the review form, 125 words.] All is not perfect in TO-land. One source of recurring disagreements and wasted time for workers is the review form. We designed it in 2008, based on the coded responses to the Bill of Rights survey. It has four main quantitative attributes: pay, fairness, speed of pay, and communicativity. While the subjectivity of these scales was a strength in Turkopticon's early days, when they sparked discussion about, for example, what counts as good pay, the usefulness of that discussion seems to be declining. Workers now seem to want more objective measures, like "How much did the task pay?" "Were you approved or rejected?" "How long did the requester take to review your work?" "If you tried to communicate with the requester, did they respond?" They don't want to argue about the subjective scales any more.
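To make the review-form discussion concrete, here is a minimal sketch in Python of how per-requester aggregates might be computed from reviews like Turkopticon's. The `Review` class, the field names, and the `aggregate` helper are all hypothetical illustrations, not Turkopticon's actual schema or code; the four subjective scales are the ones named above, and the `reward_usd` and `approved` fields stand in for the kind of objective measures workers have asked for.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Review:
    """One worker's review of one requester (hypothetical schema)."""
    requester_id: str
    # The four subjective 1-5 scales from the 2008 review form:
    pay: int
    fairness: int
    speed: int            # speed of payment
    communicativity: int
    # Stand-ins for the objective measures workers now ask for:
    reward_usd: float = 0.0
    approved: bool = True

def aggregate(reviews, requester_id):
    """Average each subjective scale across a requester's reviews,
    the way an aggregate score could be shown next to a task listing."""
    rs = [r for r in reviews if r.requester_id == requester_id]
    if not rs:
        return None  # no reviews yet for this requester
    return {
        "pay": mean(r.pay for r in rs),
        "fairness": mean(r.fairness for r in rs),
        "speed": mean(r.speed for r in rs),
        "communicativity": mean(r.communicativity for r in rs),
        # An objective summary needs no argument about what the scale means:
        "approval_rate": mean(1.0 if r.approved else 0.0 for r in rs),
        "n": len(rs),
    }

# Two reviews of the same (made-up) requester:
reviews = [
    Review("A1", pay=4, fairness=5, speed=4, communicativity=3,
           reward_usd=0.50, approved=True),
    Review("A1", pay=2, fairness=1, speed=3, communicativity=1,
           approved=False),
]
```

Averaging keeps the early, discussion-provoking subjective scales, while the boolean and dollar fields show how more objective measures could be summarized without leaving anything to argue about.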
[Outcomes and complications: identity, harassment, 40 words.] Turkopticon is also afflicted by the harassment, profanity, trolling, abuse, and general incivility that have become common online in the last decade. We've taken some steps to deal with this, but these measures are very far from perfect.

[Evolution, 80 words.] There are also technical problems that we deal with as they arise -- I can talk about them if anyone is interested.

[Management (incl. tech support, community management, legal threats, IRB complaints), 20 words.] We also spend time giving technical support and variously ignoring or fending off legal threats and IRB complaints from annoyed requesters.

# 4. Theory [1500 words.]

Current online labor markets give operators and requesters more power than workers. Giving workers more power could improve working conditions and the work itself. [Adv. to "rational actors in perfect markets" slide.] But computing as a field doesn't seem to have theory or method for understanding the distribution of power or evening it out.

Many crowd work researchers and practitioners take up -- sometimes implicitly -- elements of an old view of humans as so-called rational actors. I summarize this view in eight propositions.

[Slide: preferences given and fixed] First, people and firms -- "actors" -- have given, fixed, rational, and mathematically "well behaved" preferences among outcomes.

[Slide: economic actors maximize] Second, individuals maximize their own personal happiness, or utility, and firms maximize profits, subject to the constraints imposed by their budgets and other resources.

[Slide: actors act freely] Third, people and firms act independently. That is, they choose freely among the options presented to them according to their individual preferences, which are unaffected by the preferences of other actors, the structure of the market, or the options on offer. There is no power or coercion in market exchanges.
[Slide: complete information] Fourth, people and firms make choices with complete information about all available choices.

[Slide: efficient markets] Fifth, markets are efficient aggregators of information. Even when actors don't have complete information, markets do.

[Slide: no barriers to entry] Sixth, there are no (or low) barriers to entry for new firms.

[Slide: perfect competition] Seventh, as a result of low barriers to entry, there is "perfect competition," or at least nearly so, and all firms are "price takers." No firm can influence the prices of the goods it sells.

[Slide: Pareto-optimality] Eighth and finally, markets described by the above propositions produce Pareto-efficient or Pareto-optimal outcomes. That is, they induce actors to engage in all mutually beneficial exchanges. Once Pareto-optimality has been reached, no more exchanges can be made without making at least one party worse off.

Few working economists today still believe in this model. But it is often still taught to undergrads, so it has some influence over discourse on economic life and the responsibilities of economic actors among computing practitioners and researchers. This model is therefore still sometimes used, if not always rigorously, to evaluate, explain, or justify existing market outcomes or arrangements. For example, crowd work requesters -- and some researchers -- have argued that the relatively low wages available to crowd workers -- or the other conditions in existing crowd work arrangements that have been listed in criticisms of the industry -- are unproblematic because nobody forces crowd workers to participate in crowd work. This argument proposes that if workers find requesters too cheap, or working conditions inadequate, they are free to find other work. Because many have not done so, they must be continuously and "freely" choosing to participate in crowd work, and thus the pay and working conditions generally must not be problematic.
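For concreteness, the maximization in the second proposition has a standard textbook form. In the consumer's version (a schematic only -- the symbols below are the usual textbook ones, not anything specific to crowd work):

```latex
\max_{x}\; U(x) \quad \text{subject to} \quad p \cdot x \le m
```

Here $U$ is the actor's given, fixed utility function over consumption bundles $x$, $p$ is the price vector, and $m$ is the budget; a firm analogously chooses output $q$ to maximize profit $\pi(q) = p\,q - c(q)$. The remaining propositions amount to conditions under which these optimization problems are well posed and their solutions, in aggregate, socially benign.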
These arguments are less compelling in light of more recent economic research, which I'll summarize in eight different propositions.

[Slide: preferences socially constructed] First, preferences are not given at birth but are socially and culturally constructed. They are not fixed but change over time, owing partly to the influence of other individuals and of society broadly. Preferences are not complete; nor are they always mathematically well behaved. Their logical coherence is confounded by a variety of cognitive biases and limitations. And people in longstanding conditions of deprivation or oppression may adjust their preferences to accept and even prefer circumstances they would previously have rejected.

[Slide: actors "satisfice"] Second, people and firms may not maximize utility or profits but rather "satisfice," aiming to achieve a level of happiness or profitability above some threshold of acceptability and then declining to expend additional effort to improve the situation. And people's "objective functions" may incorporate multiple criteria, including "other-regarding preferences" such as fairness.

[Slide: actors face constrained choices; subject to power and exercise power] Third, people do not choose freely. Rather, they are constantly subject to power.

[Slide: limited information] Fourth, people and firms have limited information about their choices, and their ability to collect and process information is limited.

[Slide: herd behavior and other "irrational" phenomena shape market dynamics] Fifth, markets are not always efficient aggregators of information. Because of human cognitive biases and limitations, markets are subject to a broad range of apparently irrational dynamics such as speculative bubbles and panics.

[Slide: market power] Sixth, there are not always low barriers to market entry for new firms; market power exists. Some firms are "price takers" in competitive markets; others have the power to set prices.
Firms can also lobby regulators to protect their interests, using other types of power to raise barriers to entry for new firms beyond what is possible with market power alone.

[Slide: other criteria for evaluating market functioning, e.g., fairness] Seventh, Pareto-optimality is not the only way to evaluate market outcomes. For example, an awareness of, if not desire for, fairness appears to be a human cultural universal.

[Slide: no invisible hand] Eighth and finally, the violation of the conditions of perfect information and perfect competition means that even Pareto-optimality is not typically achieved. The notion that an "invisible hand" guides the actions of self-interested actors to produce the greatest good for all is, regrettably, an appealing but ultimately misleading fiction.

[Slide: institutions shape outcomes] Researchers have also found that economic and social life does not take place within separate spheres with their own rules -- such as the market, the family, and the state -- that interact only in prescribed and idealized ways (e.g., "government regulates the market"). It can be more realistically understood as occurring within distinct but interlinked institutional settings.

[Slide: institutions are "the prescriptions that humans use to organize all forms of repetitive and structured human interactions"] Institutions are "the prescriptions that humans use to organize all forms of repetitive and structured human interactions including those within families, neighborhoods, markets, firms, sports leagues, churches, private associations, and governments at all scales." These prescriptions have a common structure that I won't talk about here.

[Slide: situated rationality] Actors in institutions are not "rational actors". They do not possess complete information.
Nor are they merely "boundedly rational," approximating or at least striving for full rationality but constrained by limited information, information processing capacity, and cognitive biases. Rather, actors are situatedly rational: they do calculate and consider the actions of others, but their calculations and even their preferences are shaped by both the immediate situation -- including their estimations of others' preferences and their understandings of institutional prescriptions governing their situation -- and their personal histories, including ideas about appropriate conduct or desirable outcomes that they may have acquired elsewhere.

[Slide: institutional situations are interlinked, creating polycentric systems] Institutions populated by situatedly rational actors are interlinked with one another. These interlinkages create complex "polycentric" systems that defy simple categories such as market, government, family, and church. Polycentricity denotes the condition that arises when organizations with formally independent decision making centers are interlinked in practice by the consequences of the actions taken at each center.

[Slide: crowd work is a polycentric system populated by situatedly rational actors] Crowd work can be seen as one such polycentric system, populated by situatedly rational actors who act based on a combination of things: enlightened self-interest; sophisticated but imperfect and evolving models of the market and its contexts; fairness and other nonmonetary or procedural criteria such as communicativity; and perhaps even altruism.

# 5. So what?

Online labor markets first came to prominence in 2010-2014, during the "Great Recession" following the financial crisis of 2007-2010. While the US economy is widely said to have "recovered", many of the jobs lost were higher-paying than the jobs created during the recovery. A disproportionate number of the new jobs were low-wage service jobs in restaurants and hotels.
And many of the new jobs created were in the temporary staffing industry. The labor historian Jefferson Cowie summarizes the increasingly popular practice of hiring independent contractors instead of employees like this:

> For some workers, being an independent contractor means more flexibility, creativity and control over their work. However, there are many more reluctant independent contractors who want regular jobs but find themselves locked out of the system by employers looking for an easy way to buck their responsibility to their employees.

The Fair Labor Standards Act, "the bedrock of modern employment law", was signed into law by President Franklin Roosevelt in 1938. It "outlawed child labor, guaranteed a minimum wage, established the official length of the workweek at 40 hours, and required overtime pay for anything more," encouraging "employers to hire more people rather than work the ones they had to exhaustion." The history of the FLSA, Cowie writes, has been one of expanding coverage -- for example, in 1963 JFK signed the Equal Pay Act, amending the FLSA with the goal of eliminating the gender pay gap -- and increasing the minimum wage, often against fierce opposition. Cowie argues especially that more money should be allocated to enforcing the FLSA, given that classifying workers as independent contractors rather than employees often benefits the employer -- who makes the classification decision -- at the worker's expense.

Independent contractors do have more flexibility -- and many workers value that flexibility to some extent -- but flexibility is in general a greater benefit to employers than to workers. And this flexibility is only empowering in practice for a minority of workers -- typically the highly skilled and already well paid. Most workers, in contrast, would rather have stable jobs with predictable incomes. Thus the rhetoric of worker empowerment that has accompanied "flexibilization" is misleading.
As Cowie writes, "employers will always have more power than their employees, and [...] it's in their interests to make those employees work as long and as cheaply as possible." The argument that regulation impedes an individual's ability to make their own employment contract with their employer ignores this power differential, which arises partly from an information asymmetry. And, as Cowie points out, this argument is an old one, just as appealing to employers today as it was a century ago, and still just as misleading:

> In Roosevelt's day, the courts found most wages and hours legislation unconstitutional based on the doctrine of "liberty of contract." The idea was as simple as it was pernicious: wages and hours legislation violated an individual's freedom to make an independent (read: worse) deal with their employer.

Cowie doesn't discuss the role of information technologies in the growth of employee misclassification and other employer practices that appear to skirt existing employment and labor laws. Information technology has not driven these developments, but it has enabled them. In 2010, for example, computer scientist Luis von Ahn, inventor of reCAPTCHA and originator of the term "human computation," wrote in a blog post titled "Work and the internet":

> Recently I have heard more than one company saying something like: "We use Mechanical Turk because otherwise we would have to pay people $7/hour to do this task." In other words: "We use Mechanical Turk to get around the minimum wage laws." As wrong as it may sound to some, this is currently ok [i.e., legal]. In the United States, "independent contractors" are typically not covered by minimum wage laws, so while I'm not a lawyer I believe using Mechanical Turk to get around minimum wage is as legal as hiring independent contractors instead of full-time employees.
And crowd work intermediaries are well aware of the legal distinction between independent contractors and employees; indeed many of their business models effectively rely on it. Some of them, including CrowdFlower, are being sued over it. Thus the future of computationally mediated work is tightly bound up, to take the title of Cowie's editorial, with "the future of fair labor" broadly.

# Coda: People are not computers [100 words.]

How many of you have seen this website? [Show lolmythesis.com.] These are one-sentence summaries of theses and dissertations. Here's an example. [Show DoD robot, lolmythesis.com/69604118897.] How about this one? [McClintock, lolmythesis.com/104310623725.] This one might be my favorite. [Dinosaur, lolmythesis.com/104311571905.]

Ankita helped me come up with one. It's pretty simple: People are not computers. You know, it turns out that people really aren't computers. People are not computers, even if you treat them like computers. And people are not computers, even if you build a system that lets other people treat them like computers. We in "human-centered computing" supposedly know this. It turns out that they are not so-called rational actors either. And those of us who take interpretive social science seriously know that too. My question is, can we build systems with that knowledge? I hope so.