CSR NEWS

csr today - News Alert
_____________________________________________________________

CSR NEWS The Corporate Social Responsibility Network

www.csr-news.net

Facebook’s Study Did No Wrong

It came to light recently that Facebook, in collaboration with some researchers at Cornell University, had conducted a research study on some of its users, manipulating what users saw in their news feeds in order to see if there was an appreciable impact on what those users themselves then posted. Would people who saw happy news then post happy stuff themselves? Or what? Outrage ensued. After all, Facebook had intentionally made (some) people feel (a little) sadder. And they did so without users’ express consent. The study had, in other words, violated two basic rules of ethics.

But I’m not so sure there was anything wrong with Facebook’s little experiment.

Two separate questions arise here. One has to do with the ethics of the Cornell researchers: should Cornell’s ethics board have been asked to approve the study, and should it, in turn, have approved it? The other has to do with the ethics of Facebook as a company. But this is a blog about business ethics, so I’ll stick primarily to the question about Facebook. Was it wrong for Facebook to conduct this study?

With regard to Facebook’s conduct, two substantive ethical questions must be dealt with. One has to do with risk of harm. The other has to do with consent.

Let’s begin with the question of harm. The amount of harm done per person in this study was clearly trivial, perhaps literally negligible. Under most human-subjects research rules, studies that involve “minimal” risk (roughly: risks comparable to the risks of everyday life) are subject to only minimal review. Some scholars, however, have suggested a category of risk even lower than “minimal,” namely “de minimis” risk, which includes risks that are literally negligible and that hence don’t even require informed consent. This is a controversial proposal, and not all scholars will agree with it. Some will suggest that, even if the risk of harm is truly tiny, respect for human dignity requires that people be offered the opportunity to consent — or to decline to consent — to be part of the study.

So, what about the question of consent? It is a fundamental principle of research ethics that participants (“human subjects”) must consent to participate or decline to participate, and their decision must be free and well-informed. But that norm was established to protect the interests of human volunteers (as well as paid research subjects). People in both of those categories are, by signing up for a study, engaging in an activity they would otherwise have no interest in. Having someone shove a needle in your arm to test a cancer drug (or even having someone interview you about your sexual habits) is not something people normally do. We don’t normally have needles stuck in our arms unless we see some benefit for us (e.g., to prevent or cure some illness in ourselves). Research subjects are doing something out of the ordinary — subjecting themselves to some level of risk, just so that others may benefit from the knowledge generated — and so the idea is that they have a strong right to know what they’re getting themselves into. But the users of commercial products — such as Facebook — are in a different situation. They want to experience Facebook (with all its ups and downs), because they see it as bringing them benefits, benefits that outweigh whatever downsides come with the experience. Facebook, all jokes aside, is precisely unlike having an experimental drug injected into your arm.

Now think back, if you will, to the last time Facebook engaged in action that it knew, with a high level of certainty, would make some of its users sad. When was that? It was the last time Facebook engaged in one of its infamous rejiggings of its layout and/or news feed. As any Facebook user knows, these changes happen alarmingly often, and almost never seem to do anything positive in terms of user experience. Every time one of those changes is made (and made, it is worth noting, for reasons entirely opaque to users), the internet lights up with the bitter comments of millions of Facebook users who wish the company would just leave well enough alone. (This point was also made by a group of bioethicists, who noted that if Facebook messed with people’s minds here, it did so no more than usual.)

The more general point is this: it is perfectly acceptable for a company to change its services in ways that might make people unhappy, or even in ways that are bound to make at least some of its users unhappy. And in fact Facebook would never have suffered criticism for doing so if it had simply never published the results. But the point here is not just that Facebook could have got away with it by keeping quiet. The point is that if it hadn’t published, there literally would have been no objection to make. Why, you ask?

If Facebook had simply manipulated users’ news feeds and kept the results to itself, this process would likely have fallen under the heading of what is known, in research ethics circles, as “program evaluation.” Program evaluation is, roughly speaking, anything an organization does to gather data on its own activities, with an eye to understanding how well it is doing and how to improve its own workings. If, for example, a university professor like me alters some minor aspect of his course in order to determine whether it affects student happiness (perhaps as reflected in standard course evaluations), that would be just fine. It would be considered program evaluation and hence utterly exempt from the rules governing research ethics. But if that professor were to collect the data and analyze it for publication in a peer-reviewed journal, it would then be called “research” and hence subject to those stricter rules, including review by an independent ethics board. But that’s because publication is the coin of the realm in the publish-or-perish world of academia. There, the drive to publish is so strong that — so the worry goes, and it is not an unsubstantiated worry — professors will expose unwitting research subjects to unreasonable risks, in pursuit of the all-important publication. That’s why the standard is higher for academic work that counts as research.

None of this — the fact that Facebook isn’t an academic entity, and that it was arguably conducting something like program evaluation — none of this implies that ethical standards don’t apply. No company has the right to subject people to serious unanticipated risks. But Facebook wasn’t doing that. The risks were small, and well within the range of ‘risks’ (can you even call them that?) experienced by Facebook’s users on a regular basis. This example illustrates nicely why there is a field called “business ethics” (and “research ethics” and “medical ethics,” and so on). While ethics is essential to the conduct of business, there’s no particular reason to think that ethics in business must be exactly the same as ethics in other realms. And the behaviour of Facebook in this case was entirely consistent with the demands of business ethics.

A true leader would rename the Washington R*dskins right away

A leader has to be able to do hard things, including, perhaps especially, leading his or her organization through difficult changes. Indeed, many leadership scholars regard that as the key difference between the science of managing and the art of leading. Lots of people may be able to manage an organization competently in pursuit of well-established goals. Fewer can lead an organization when hard changes need to be made. And in the case of Daniel Snyder, the owner of a certain football team whose home base is Washington, DC, one of those hard changes should be to get on with it and change his team’s name.

Snyder has faced a groundswell of criticism over his team’s continued use of the “R*dskins” moniker. There have been vows to boycott the team and its paraphernalia. A growing number of media outlets have even vowed no longer to use the team’s current name in their coverage of the team. There’s even a Wikipedia page detailing the ethical debate over what many take to be an offensive, even racist name.

And if Snyder is going to change the team’s name (something he’s given no indication he is inclined to do), it needn’t be just because he’s worried about offending people. Two professors from Emory University have argued that there’s a good business argument for changing the team’s name. In particular, their analysis suggests that the name is bad for brand equity. “Elementary principles of brand management,” they state, “suggest dropping the team name.”

The U.S. Patent and Trademark Office has even entered the fray by canceling the team’s trademark registration. The PTO has rules, it seems, against trademarking racial slurs. This doesn’t mean that the team has to change its name, but it surely helps to devalue the brand and promises to reduce income from merchandising.

The whole sorry mess has the feeling of inevitability about it. The name can’t stay forever. The tide of history—and sound ethical reasoning—is against Snyder on this one. Snyder is an employer, most of whose employees are members of a historically disadvantaged group. It is unseemly at best to resist so adamantly the pleas of members of another historically disadvantaged group that he stop making money from a brand that adds insult to injury.

It is time for Daniel Snyder to act like a leader, to do the hard thing—the honourable thing—and change that name.

Disrupting management ideas

Over the last few days we have seen a captivating debate unfolding. Jill Lepore’s article in The New Yorker on the concept of ‘disruptive innovation’ has garnered quite some attention, not least from its progenitor, Lepore’s Harvard colleague Clayton Christensen, who appears to be anything but amused.

Disruptive innovations, put simply, are new products or services that create new markets while rendering existing solutions to customer demands obsolete, thereby destroying existing markets and the companies that serve them. In his many books, Christensen initially developed the idea in a corporate context (as in his floppy disk, steel, and construction equipment examples), but it quickly branched out into other sectors.

The article is a fascinating read, and not just because it takes on an idea that has gone largely uncontested in academia and beyond; the concept of ‘disruptive innovation’ has also had quite a substantial impact on the real world. Lepore writes as a historian and lays bare the superficial and ideological nature of the idea. The piece is also worth reading because it subjects Christensen’s ‘case study’ approach (after all, a hallmark of its intellectual birthplace) to thorough historical analysis, an analysis that exposes the data at the heart of Christensen’s ‘disruption’ theory as utterly wanting.

Now, it is always fun to question conventional wisdom and powerful ideas, especially when they come from a Harvard Business School professor recently honored as No. 1 in the Thinkers50 ranking. As some of our readers might remember, we also enjoyed doing a similar job on his colleague Michael Porter’s ‘big idea’ of Creating Shared Value earlier this year. But there is a danger that such skirmishes remain internal quibbles inside the ivory tower, about which another former Harvard colleague, Henry Kissinger, once said that they ‘are so vicious because there is so little at stake’…

Lepore’s article clearly goes beyond that. Two things seem worth highlighting: first, she places a management theory in a wider intellectual-historical context, and second, she shows that such management ideas are deeply ideological constructs:

"Beginning in the eighteenth century, as the intellectual historian Dorothy Ross once pointed out, theories of history became secular; then they started something new—historicism, the idea “that all events in historical time can be explained by prior events in historical time.” Things began looking up. First, there was that, then there was this, and this is better than that. The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. […] 
The idea of progress—the notion that human history is the history of human betterment—dominated the world view of the West between the Enlightenment and the First World War. It had critics from the start, and, in the last century, even people who cherish the idea of progress, and point to improvements like the eradication of contagious diseases and the education of girls, have been hard-pressed to hold on to it while reckoning with two World Wars, the Holocaust and Hiroshima, genocide and global warming. Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer. […] 
The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved."

Disruptive innovation, in its reception in business, academia, public administration and politics, has had some rather devastating (side) effects, as Lepore eloquently points out. The crucial lesson of her essay, though, lies in its unmasking of what sounds like a rather technocratic ‘theory’ as something deeply informed by a particular view of the world, by a particular normative take on how humans have evolved historically.

As the article points out, such functionalist and technocratic ‘theories’ ignore other dimensions of human life. ‘Disrupting’, sold as a good thing and as the natural way in which organizations evolve, overlooks important dimensions of human development, especially once the concept is extended beyond business to schools, hospitals, prisons, museums and so on. The ethical implications of such a theory, Lepore argues, are entirely ignored in Christensen’s framework.

One central lesson of the article for everyone concerned with the role of business in contemporary society, be they academics, executives or politicians, is the pivotal importance of understanding the intellectual heritage and presuppositions of the core theories and ideas that have shaped contemporary social (including business) reality. In that sense, Lepore’s piece is a truly ‘critical’ contribution to management, and the set of historical ‘criteria’ by which she does the job should encourage management academics in particular to move beyond the confines of their discipline. To understand the power of ideas, we have to look at the broader picture of their origin and their contemporary drivers, but also at their wider implications for society.


Photos (top by Andy Kaufman; middle by Nicolas Nova) reproduced under the Creative Commons license.

Imprint and Contact:

CSR NEWS GmbH | Unterscheideweg 13 | D-42499 Hueckeswagen | Germany
http://www.csr-news.net

Managing Editor: Achim Halfmann (V.i.S.d.P.)
Scientific Director: Dr. Thomas Beschorner

Registration: Amtsgericht Koeln, Germany - HRB 60902

Email: redaktion@csr-news.net
Phone: +49 (0)2192 8770000