
Discover Bots and Businesses in Messenger

Facebook - Wed, 06/28/2017 - 18:00

by Yingming Chen, Engineer, Messenger

We first announced our new Discover section at F8 to make it easier for people to find the experiences developers have built for interacting with businesses on Messenger. Since that announcement, we’ve been working to make it more intuitive and relevant for you. Today we’re excited to announce v1.1 of Discover, which lets people browse and find bots and businesses in Messenger and is rolling out starting today for people in the U.S.

Here’s how Discover works: when you tap the Discover icon in the lower right-hand corner of the Messenger Home screen, you can browse by category, recently visited businesses and featured experiences. Discover makes it even easier to get things done, from reading the latest articles and booking your next vacation to getting the latest sports highlights, right in Messenger. In addition to this full rollout to U.S. consumers, we’ve also updated the units that appear in Discover, showcasing the many ways you can interact with businesses, get your questions answered and find the information you want.

Here’s what you’ll find in Discover:

Recently Used: Shows you the bots and businesses you recently interacted with.

Featured: A representation of the full range of experiences available in Messenger. Helps people find bots and businesses to explore.

Categories: Bots and businesses organized by topic. Refreshed frequently so you can find new experiences.

Our goal with Discover is to ensure that experiences in Messenger are compelling, high quality and easy to find. This latest update makes it even more intuitive for people to find what they care about most. And be sure to keep coming back – new experiences are always added!

For developers and businesses interested in getting their experiences added to the Discover section, please go here.


Two Billion People Coming Together on Facebook

Facebook - Tue, 06/27/2017 - 19:08

By Mike Nowak, Product Director, and Guillermo Spiller, Product Manager

As Mark Zuckerberg announced today, we reached a new milestone: there are now 2 billion people connecting and building communities on Facebook every month.

This wouldn’t have happened without the millions of smaller communities and individuals who are sharing and making meaningful contributions every day. Each day, more than 175 million people share a Love reaction, and on average, over 800 million people like something on Facebook. More than 1 billion people use Groups every month.

To show our appreciation for the many ways people support one another on Facebook, we will share several personalized experiences over the coming days.

Good Adds Up Video

We are launching a personalized video to celebrate bringing the world closer together. You may see your video in your News Feed or by visiting facebook.com/goodaddsup.

Celebrating the Good People Do

After someone reacts to a friend’s post with Love, wishes someone happy birthday or creates a group, they will see a message in News Feed thanking them.

Sharing Community Stories and Impact

On facebook.com/goodaddsup, we are featuring fun facts about how people are contributing to the community. In the US, we are also sharing stories of people who inspire us. Every day, people connect with one another, contribute to their local communities and help make the world a better place.

We want to help do our part as well. As Mark mentioned last week at the Facebook Communities Summit, our mission is to bring the world closer together. Reaching this milestone is just one small step toward that goal. We are excited to continue to build products that allow people to connect with one another, regardless of where they live or what language they speak.

Thank you for being part of our global community!


Hard Questions: Hate Speech

Facebook - Tue, 06/27/2017 - 14:00

Who should decide what is hate speech in an online global community?
By Richard Allan, VP EMEA Public Policy

As more and more communication takes place in digital form, the full range of public conversation is moving online — in groups and broadcasts, in text and video, even with emoji. These discussions reflect the diversity of human experience: some are enlightening and informative, others are humorous and entertaining, and others still are political or religious. Some can also be hateful and ugly. Most responsible communications platforms and systems are now working hard to restrict this kind of hateful content.

Facebook is no exception. We are an open platform for all ideas, a place where we want to encourage self-expression, connection and sharing. At the same time, when people come to Facebook, we always want them to feel welcome and safe. That’s why we have rules against bullying, harassing and threatening someone.

But what happens when someone expresses a hateful idea online without naming a specific person? A post that calls all people of a certain race “violent animals” or describes people of a certain sexual orientation as “disgusting” can feel very personal and, depending on someone’s experiences, could even feel dangerous. In many countries around the world, those kinds of attacks are known as hate speech. We are opposed to hate speech in all its forms, and don’t allow it on our platform.

In this post we want to explain how we define hate speech and approach removing it — as well as some of the complexities that arise when it comes to setting limits on speech at a global scale, in dozens of languages, across many cultures. Our approach, like those of other platforms, has evolved over time and continues to change as we learn from our community and from experts in the field, and as technology provides us with new tools to operate more quickly and more precisely at scale.

Defining Hate Speech

The first challenge in stopping hate speech is defining its boundaries.

People come to Facebook to share their experiences and opinions, and topics like gender, nationality, ethnicity and other personal characteristics are often a part of that discussion. People might disagree about the wisdom of a country’s foreign policy or the morality of certain religious teachings, and we want them to be able to debate those issues on Facebook. But when does something cross the line into hate speech?

Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.

In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.

People who live in the same country — or next door — often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?

There is very important academic work in this area that we follow closely. Timothy Garton Ash, for example, has created the Free Speech Debate to look at these issues on a cross-cultural basis. Susan Benesch established the Dangerous Speech Project, which investigates the connection between speech and violence. These projects show how much work is left to be done in defining the boundaries of speech online, which is why we’ll keep participating in this work to help inform our policies at Facebook.

Enforcement

We’re committed to removing hate speech any time we become aware of it. Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.*)

But it’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls — and too often we get it wrong.

Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people. If we identify credible threats of imminent violence against anyone, including threats based on a protected characteristic, we also escalate that to local law enforcement.

But sometimes, there isn’t a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today.

Here are some of the things we take into consideration when deciding what to leave on the site and what to remove.

Context

What does the statement “burn flags not fags” mean? While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to “reclaim” the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it’s a hate speech violation, more context is needed.

Often the most difficult edge cases involve language that seems designed to provoke strong feelings, making the discussion even more heated — and a dispassionate look at the context (like country of speaker or audience) more important. Regional and linguistic context is often critical, as is the need to take geopolitical events into account. In Myanmar, for example, the word “kalar” has benign historic roots, and is still used innocuously across many related Burmese words. The term can however also be used as an inflammatory slur, including as an attack by Buddhist nationalists against Muslims. We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. We’ve had trouble enforcing this policy correctly recently, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.

In Russia and Ukraine, we faced a similar issue around the use of slang words the two groups have long used to describe each other. Ukrainians call Russians “moskal,” literally “Muscovites,” and Russians call Ukrainians “khokhol,” literally “topknot.” After conflict started in the region in 2014, people in both countries started to report the words used by the other side as hate speech. We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.

Often a policy debate becomes a debate over hate speech, as two sides adopt inflammatory language. This is often the case with the immigration debate, whether it’s about the Rohingya in South East Asia, the refugee influx in Europe or immigration in the US. This presents a unique dilemma: on the one hand, we don’t want to stifle important policy conversations about how countries decide who can and can’t cross their borders. At the same time, we know that the discussion is often hurtful and insulting.

When the influx of migrants arriving in Germany increased in recent years, we received feedback that some posts on Facebook were directly threatening refugees or migrants. We investigated how this material appeared globally and decided to develop new guidelines to remove calls for violence against migrants or dehumanizing references to them — such as comparisons to animals, to filth or to trash. But we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate.

Intent

People’s posts on Facebook exist in the larger context of their social relationships with friends. When a post is flagged for violating our policies on hate speech, we don’t have that context, so we can only judge it based on the specific text or images shared. But the context can indicate a person’s intent, which can come into play when something is reported as hate speech.

There are times when someone might share something that would otherwise be considered hate speech, but for non-hateful reasons, such as making a self-deprecating joke or quoting lyrics from a song. People often use satire and comedy to make a point about hate speech.

Or they speak out against hatred by condemning someone else’s use of offensive language, which requires repeating the original offense. This is something we allow, even though it might seem questionable since it means some people may encounter material disturbing to them. But it also gives our community the chance to speak out against hateful ideas. We revised our Community Standards to encourage people to make it clear when they’re sharing something to condemn it, but sometimes their intent isn’t clear, and anti-hatred posts get removed in error.

On other occasions, people may reclaim offensive terms that were used to attack them. When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them. For example, the use of the word “dyke” may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay. However, if someone posted a photo of themselves with #dyke, it would be allowed. Another example is the word “faggot.” This word could be considered hate speech when directed at a person, but, in Italy, among other places, “frocio” (“faggot”) is used by LGBT activists to denounce homophobia and reclaim the word. In these cases, removing the content would mean restricting someone’s ability to express themselves on Facebook.

Mistakes

If we fail to remove content that you report because you think it is hate speech, it feels like we’re not living up to the values in our Community Standards. When we remove something you posted and believe is a reasonable political view, it can feel like censorship. We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.

Our mistakes have caused a great deal of concern in a number of communities, including among groups who feel we act — or fail to act — out of bias. We are deeply committed to addressing and confronting bias anywhere it may exist. At the same time, we work to fix our mistakes quickly when they happen.

Last year, Shaun King, a prominent African-American activist, posted hate mail he had received that included vulgar slurs. We took down Mr. King’s post in error — not recognizing at first that it was shared to condemn the attack. When we were alerted to the mistake, we restored the post and apologized. Still, we know that these kinds of mistakes are deeply upsetting for the people involved and cut against the grain of everything we are trying to achieve at Facebook.

Continuing To Improve

People often ask: can’t artificial intelligence solve this? Technology will continue to be an important part of how we try to improve. We are, for example, experimenting with ways to filter the most obviously toxic language in comments so they are hidden from posts. But while we’re continuing to invest in these promising advances, we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.

That’s why we rely so heavily on our community to identify and report potential hate speech. With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech. Our model builds on the eyes and ears of everyone on the platform — the people who vigilantly report millions of posts to us each week for all sorts of potential violations. We then have our teams of reviewers, who have broad language expertise and work 24 hours a day across time zones, to apply our hate speech policies.

We’re building up these teams that deal with reported content: over the next year, we’ll add 3,000 people to our community operations team around the world, on top of the 4,500 we have today. We’ll keep learning more about local context and changing language. And, because measurement and reporting are an important part of our response to hate speech, we’re working on better ways to capture and share meaningful data with the public.

Managing a global community in this manner has never been done before, and we know we have a lot more work to do. We are committed to improving — not just when it comes to individual posts, but in how we discuss and explain our choices and policies overall.

Read more about our new blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better. Please send suggestions to hardquestions@fb.com.

*What’s in the numbers:

  • These numbers represent an average from April and May 2017.
  • These numbers reflect content that was reported for hate speech and subsequently deleted, whatever the reason (a minimal sketch of this counting logic follows the list).
  • The numbers are specific to reports on individual posts on Facebook.
    • These numbers do not include hate speech deleted from Instagram.
    • These numbers do not include hate speech that was deleted because an entire page, group or profile was taken down or disabled. This means we could be drastically undercounting because a hateful group may contain many individual items of hate speech.
    • These numbers do not include hate speech that was reported for other reasons.
      • For example, outrageous statements are sometimes used to get people to click on spam links; under our current definitions, if such a post was reported as spam, we do not track it as hate speech.
      • For example, if a post was reported for nudity or bullying, but deleted for hate speech, it would not be counted in these numbers.
    • These numbers might include content that was reported for hate, but deleted for other reasons.
      • For example, if a post was reported for hate speech, but deleted for nudity or bullying, it would be counted in these numbers.
    • These numbers also contain instances when we may have taken down content mistakenly.
  • The numbers vary dramatically over time due to offline events (like the aftermath of a terror attack) or online events (like a spam attack).
  • We are exploring a better process by which to log our reports and removals, for more meaningful and accurate data.
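
To make the counting logic in these notes concrete, here is a minimal sketch in Python of how a single report-and-deletion record would or would not be counted under the rules above. The field names and values (reported_for, deleted_for, surface) are illustrative assumptions, not Facebook’s actual data model.

    # Hypothetical sketch of the counting rule described in the notes above.
    # A post counts toward the reported-hate-speech figure if it was reported
    # for hate speech and subsequently deleted, whatever the deletion reason.
    # Posts reported only for other reasons are not counted, even if they
    # were ultimately deleted for hate speech.
    from dataclasses import dataclass

    @dataclass
    class DeletionRecord:
        reported_for: set        # reasons attached to the user reports
        deleted_for: str         # reason the reviewer ultimately deleted it
        surface: str             # e.g. "facebook_post", "instagram", "page_takedown"

    def counts_toward_figure(record: DeletionRecord) -> bool:
        # Only individual Facebook posts are in scope (not Instagram, and not
        # whole pages, groups or profiles removed in a single action).
        if record.surface != "facebook_post":
            return False
        # Counted if the report mentioned hate speech, regardless of why the
        # post was ultimately deleted.
        return "hate_speech" in record.reported_for

    # Reported for hate speech but deleted for nudity: counted.
    print(counts_toward_figure(DeletionRecord({"hate_speech"}, "nudity", "facebook_post")))  # True
    # Reported only as spam, deleted for hate speech: not counted.
    print(counts_toward_figure(DeletionRecord({"spam"}, "hate_speech", "facebook_post")))    # False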

Facebook, Microsoft, Twitter and YouTube Announce Formation of the Global Internet Forum to Counter Terrorism

Facebook - Mon, 06/26/2017 - 19:30

Today, Facebook, Microsoft, Twitter and YouTube are announcing the formation of the Global Internet Forum to Counter Terrorism, which will help us continue to make our hosted consumer services hostile to terrorists and violent extremists.

The spread of terrorism and violent extremism is a pressing global problem and a critical challenge for us all. We take these issues very seriously, and each of our companies has developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services. We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online.

The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the UK and other governments; and the conclusions of the recent G7 and European Council meetings. It will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.  

The scope of our work will evolve over time as we respond to ever-evolving terrorist and extremist tactics. Initially, however, our work will focus on:

  1. Technological solutions: Our companies will work together to refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchange best practices as we develop and implement new content detection and classification techniques using machine learning; and define standard transparency reporting methods for terrorist content removals.
  2. Research: We will commission research to inform our counter-speech efforts and guide future technical and policy decisions around the removal of terrorist content.
  3. Knowledge-sharing: We will work with counter-terrorism experts including governments, civil society groups, academics and other companies to engage in shared learning about terrorism. And through a joint partnership with the UN Security Council Counter-Terrorism Executive Directorate (UN CTED) and the ICT4Peace Initiative, we are establishing a broad knowledge-sharing network to:
    • Engage with smaller companies: We will help them develop the technology and processes necessary to tackle terrorist and extremist content online.
    • Develop best practices: We already partner with organizations such as the Center for Strategic and International Studies, Anti-Defamation League and Global Network Initiative to identify how best to counter extremism and online hate, while respecting freedom of expression and privacy. We can socialize these best practices, and develop additional shared learnings on topics such as community guideline development and policy enforcement.
    • Counter-speech: Each of us already has robust counter-speech initiatives in place (e.g., YouTube’s Creators for Change, Jigsaw’s Redirect Method, Facebook’s P2P and OCCI, Microsoft’s partnership with the Institute for Strategic Dialogue for counter-narratives on Bing, Twitter’s global NGO training program). The forum we have established allows us to learn from and contribute to one another’s counter-speech efforts, discuss how to further empower and train civil society organizations and individuals who may be engaged in similar work, and support ongoing efforts such as the Civil Society Empowerment Project (CSEP).

We will be hosting a series of learning workshops in partnership with UN CTED/ICT4Peace in Silicon Valley and around the world to drive these areas of collaboration.

Further information on all of the above initiatives will be shared in due course.

 


Our First Communities Summit and New Tools For Group Admins

Facebook - Thu, 06/22/2017 - 18:12

By Kang-Xing Jin, VP, Engineering

Today we hosted our first-ever Facebook Communities Summit in Chicago with hundreds of group admins where we announced new features to support their communities on Facebook.

Mark Zuckerberg kicked off by celebrating the role Groups play in the Facebook community and thanking the group admins who lead them. He also announced a new mission for Facebook that will guide our work over the next decade: Give people the power to build community and bring the world closer together.

An important part of delivering on our new mission is supporting group admins, who are real community leaders on Facebook. We’re adding several new features to help them grow and manage their groups:

  • Group Insights: group admins have told us consistently that having a better understanding of what’s going on in their groups would help them make decisions on how to best support their members. Now, with Group Insights, they’ll be able to see real-time metrics around growth, engagement and membership — such as the number of posts and times that members are most engaged.
  • Membership request filtering: we also hear from admins that admitting new members is one of the most time-consuming things they do. So, we added a way for them to sort and filter membership requests by common categories like gender and location, and then accept or decline all at once (see the sketch after this list).
  • Removed member clean-up: to help keep their communities safe from bad actors, group admins can now remove a person and the content they’ve created within the group, including posts, comments and other people added to the group, in one step.
  • Scheduled posts: group admins and moderators can create and conveniently schedule posts on a specific day and time.
  • Group to group linking: we’re beginning to test group-to-group linking, which allows group admins to recommend similar or related groups to their members. This is just the beginning of ways that we’re helping bring communities and sub-communities closer together.
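
As referenced in the membership request filtering item above, here is a minimal, hypothetical sketch of the filter-then-bulk-action workflow such a tool enables. The MembershipRequest fields and the accept_all helper are illustrative assumptions, not Facebook’s Groups API.

    # Illustrative sketch only: filter pending membership requests by a common
    # category (e.g. location), then act on the whole filtered batch at once.
    from dataclasses import dataclass

    @dataclass
    class MembershipRequest:
        user_id: int
        gender: str
        location: str

    def filter_requests(requests, location=None, gender=None):
        # Keep only the pending requests that match the given filters.
        matched = requests
        if location is not None:
            matched = [r for r in matched if r.location == location]
        if gender is not None:
            matched = [r for r in matched if r.gender == gender]
        return matched

    def accept_all(requests):
        # A real admin tool would call the platform here; we just return the IDs.
        return [r.user_id for r in requests]

    pending = [
        MembershipRequest(1, "female", "Chicago"),
        MembershipRequest(2, "male", "Austin"),
        MembershipRequest(3, "female", "Chicago"),
    ]
    print(accept_all(filter_requests(pending, location="Chicago")))  # [1, 3]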

More than 1 billion people around the world use Groups, and more than 100 million people are members of “meaningful groups.” These are groups that quickly become the most important part of someone’s experience on Facebook. Today we’re setting a goal to help 1 billion people join meaningful communities like these.

In Chicago, we celebrated some of these groups built around local neighborhoods, shared passions and life experiences. For example, some of the groups and admins that attended include:

  • Terri Hendricks, who started Lady Bikers of California so that women who ride motorcycles could connect with each other, meet in real life through group rides, and offer each other both motorcycle-related and personal support. Terri says that when she started riding motorcycles it was rare to see other women who rode and that across the group, there is “nothing that these ladies wouldn’t do for each other.”
  • Matthew Mendoza, who started Affected by Addiction Support Group. The group is a safe space for people who are experiencing or recovering from drug and alcohol addiction, as well as their friends and family, to offer support and share stories.
  • Kenneth Goodwin, minister of Bethel Church in Decatur, Georgia, who uses the Bethel Original Free Will Baptist Church group to post announcements to the local community about everything happening at Bethel. He and the other admins will often share information about events, meeting times for their small group ministries, and live videos of sermons so people who cannot attend can watch from their homes.

We’re inspired by these stories and the hundreds of others we’ve heard from people attending today’s event. We’re planning more events to bring together group admins outside the US and look forward to sharing more details soon.


Giving People More Control Over Their Facebook Profile Picture

Facebook - Thu, 06/22/2017 - 04:00

By Aarati Soman, Product Manager

Part of our goal in building global community is understanding the needs of people who use Facebook in specific countries and how we can better serve them. In India, we’ve heard that people want more control over their profile pictures, and we’ve been working over the past year to understand how we can help.

Today, we are piloting new tools that give people in India more control over who can download and share their profile pictures. In addition, we’re exploring ways people can more easily add designs to profile pictures, which our research has shown to be helpful in deterring misuse. Based on what we learn from our experience in India, we hope to expand to other countries soon.

Profile pictures are an important part of building community on Facebook because they help people find friends and create meaningful connections. But not everyone feels safe adding a profile picture. In our research with people and safety organizations in India, we’ve heard that some women choose not to share profile pictures that include their faces anywhere on the internet because they’re concerned about what may happen to their photos.

These tools, developed in partnership with Indian safety organizations like Centre for Social Research, Learning Links Foundation, Breakthrough and Youth Ki Awaaz, are designed to give people more control over their experience and help keep them safe online.

New Controls

People in India will start seeing a step-by-step guide to add an optional profile picture guard. When you add this guard:

  • Other people will no longer be able to download, share or send your profile picture in a message on Facebook
  • People you’re not friends with on Facebook won’t be able to tag anyone, including themselves, in your profile picture
  • Where possible, we’ll prevent others from taking a screenshot of your profile picture on Facebook; this protection is currently available only on Android devices
  • We’ll display a blue border and shield around your profile picture as a visual cue of protection

Deterring Misuse

Based on preliminary tests, we’ve learned that when someone adds an extra design layer to their profile picture, other people are at least 75% less likely to copy that picture.

We partnered with Jessica Singh, an illustrator who took inspiration from traditional Indian textile designs such as bandhani and kantha, to create designs for people to add to their profile picture.

If someone suspects that a picture marked with one of these designs is being misused, they can report it to Facebook and we will use the design to help determine whether it should be removed from our community.


Hard Questions: How We Counter Terrorism

Facebook - Thu, 06/15/2017 - 19:00

By Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager

In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online. We want to answer those questions head on. We agree with those who say that social media should not be a place where terrorists have a voice. We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission.

In this post, we’ll walk through some of our behind-the-scenes work, including how we use artificial intelligence to keep terrorist content off Facebook, something we have not talked about publicly before. We will also discuss the people who work on counterterrorism, some of whom have spent their entire careers combating terrorism, and the ways we collaborate with partners outside our company.

Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role — and we don’t want Facebook to be used for any terrorist activity whatsoever.

We believe technology, and Facebook, can be part of the solution.

We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion people every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do. But we do want to share what we are working on and hear your feedback so we can do better.

Artificial Intelligence

We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism are accounts we find ourselves. But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting-edge techniques on combating terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course. We are constantly updating our technical solutions, but here are some of our current efforts.

  • Image matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform (a simplified sketch of this kind of matching appears after this list).
  • Language understanding: We have also recently started to experiment with using AI to understand text that might be advocating for terrorism. We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.
  • Removing terrorist clusters: We know from studies of terrorists that they tend to radicalize and operate in clusters. This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to “fan out” to try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
  • Recidivism: We’ve also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we’ve been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly.
  • Cross-platform collaboration: Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe.
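
To illustrate the image matching described in the first item of this list, here is a simplified Python sketch that checks an upload against a set of known fingerprints using an exact cryptographic hash. Real systems rely on perceptual hashes that survive re-encoding and cropping; this sketch, which uses plain SHA-256, only catches byte-identical re-uploads and is an illustration under stated assumptions rather than the actual mechanism.

    # Simplified sketch of fingerprint matching against previously removed media.
    # Assumption: exact SHA-256 hashes stand in for the fingerprints used in
    # production, so only byte-identical re-uploads would be caught here.
    import hashlib

    known_fingerprints = {
        # Hashes of previously removed propaganda files would be stored here.
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(media_bytes: bytes) -> str:
        return hashlib.sha256(media_bytes).hexdigest()

    def block_if_known(media_bytes: bytes) -> bool:
        # True means the upload matches a known fingerprint and should never
        # reach the platform.
        return fingerprint(media_bytes) in known_fingerprints

    print(block_if_known(b"example video bytes"))  # False for unknown content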

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story. Some of the most effective criticisms of brutal groups like ISIS utilize the group’s own propaganda against it. To understand more nuanced cases, we need human expertise.

  • Reports and reviews: Our community — that’s the people on Facebook — helps us by reporting accounts or content that may violate our policies — including the small fraction that may be related to terrorism. Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training.
  • Terrorism and safety specialists: In the past year we’ve also significantly grown our team of counterterrorism specialists. At Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility. This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. Within this specialist team alone, we speak nearly 30 languages.
  • Real-world threats: We increasingly use AI to identify and remove terrorist content, but computers are not very good at identifying what constitutes a credible threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.

Partnering with Others

Working to keep terrorism off Facebook isn’t enough because terrorists can jump from platform to platform. That’s why partnerships with others — including other companies, civil society, researchers and governments — are so crucial.

  • Industry cooperation: In order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of “hashes” — unique digital fingerprints for photos and videos — for content produced by or in support of terrorist organizations. This collaboration has already proved fruitful, and we hope to add more partners in the future. We are grateful to our partner companies for helping keep Facebook a safe place.
  • Governments: Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that is impossible for companies to develop independently. We have learned much through briefings from agencies in different countries about ISIS and Al Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration by organizations such as the EU Internet Forum, the Global Coalition Against Daesh, and the UK Home Office.
  • Encryption: We know that terrorists sometimes use encrypted messaging to communicate. Encryption technology has many legitimate uses – from protecting our online banking to keeping our photos safe. It’s also essential for journalists, NGO workers, human rights campaigners and others who need to know their messages will remain secure. Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies.
  • Counterspeech training: We also believe challenging extremist narratives online is a valuable part of the response to real world extremism. Counterspeech comes in many forms, but at its core these are efforts to prevent people from pursuing a hate-filled, violent life or convincing them to abandon such a life. But counterspeech is only effective if it comes from credible speakers. So we’ve partnered with NGOs and community groups to empower the voices that matter most.
  • Partner programs: We support several major counterspeech programs. For example, last year we worked with the Institute for Strategic Dialogue to launch the Online Civil Courage Initiative, a project that has engaged with more than 100 anti-hate and anti-extremism organizations across Europe. We’ve also worked with Affinis Labs to host hackathons in places like Manila, Dhaka and Jakarta, where community leaders joined forces with tech entrepreneurs to develop innovative solutions to push back against extremism and hate online. And finally, the program we’ve supported with the widest global reach is a student competition organized through the P2P: Facebook Global Digital Challenge. In less than two years, P2P has reached more than 56 million people worldwide through more than 500 anti-hate and extremism campaigns created by more than 5,500 university students in 68 countries.

Our Commitment

We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late. We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future.

Read more about our new blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better. Please send suggestions to hardquestions@fb.com.


Hard Questions

Facebook - Thu, 06/15/2017 - 14:00

By Elliot Schrage, Vice President for Public Policy and Communications

Today we’re starting something new.

Facebook is where people post pictures with their friends, get their news, form support groups and hold politicians to account. What started out as a way for college students in the United States to stay in touch is now used by nearly 2 billion people around the world. The decisions we make at Facebook affect the way people find out about the world and communicate with their loved ones.

It goes far beyond us. As more and more of our lives extend online, and digital technologies transform how we live, we all face challenging new questions — everything from how best to safeguard personal privacy online to the meaning of free expression to the future of journalism worldwide.

We debate these questions fiercely and freely inside Facebook every day — and with experts from around the world whom we consult for guidance. We take seriously our responsibility — and accountability — for our impact and influence.

We want to broaden that conversation. So today, we’re starting a new effort to talk more openly about some complex subjects. We hope this will be a place not only to explain some of our choices but also explore hard questions, such as:

  • How should platforms approach keeping terrorists from spreading propaganda online?
  • After a person dies, what should happen to their online identity?
  • How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
  • Who gets to define what’s false news — and what’s simply controversial political speech?
  • Is social media good for democracy?
  • How can we use data for everyone’s benefit, without undermining people’s trust?
  • How should young internet users be introduced to new ways to express themselves in a safe environment?

As we proceed, we certainly don’t expect everyone to agree with all the choices we make. We don’t always agree internally. We’re also learning over time, and sometimes we get it wrong. But even when you’re skeptical of our choices, we hope these posts give a better sense of how we approach them — and how seriously we take them. And we believe that by becoming more open and accountable, we should be able to make fewer mistakes, and correct them faster.

Our first substantive post, later today, will be about responding to the spread of terrorism online — including the ways we’re working with others and using new technology.

We want your input on what other topics we should address — and what we could be doing better. Please send suggestions to hardquestions@fb.com.


Celebrating 30 Years of the GIF

Facebook - Thu, 06/15/2017 - 05:59

On June 15, we’re celebrating the 30th anniversary of the GIF, which has made communicating on the internet more joyful, more visual and let’s face it, a whole lot funnier! To mark the big 3-0, we’re:

  • Taking an inside look at GIF popularity on Messenger
  • Announcing that GIFs in comments are now available to everyone on Facebook (yay!)
  • Introducing some new and exclusive GIFs we’ve created featuring some of the internet’s biggest stars
  • Asking you to help us answer the age-old debate of how to pronounce the word “GIF”

An Inside Look at GIFs in Messenger

With this milestone approaching, we took a look at how GIFs have transformed the way people communicate with each other since we introduced GIFs in Messenger in 2015:

  • People on Messenger sent nearly 13 billion GIFs in the last year, or nearly 25,000 GIFs every minute
  • GIF sends on Messenger have tripled in the past year
  • New Year’s Day 2017 was the most popular day ever for GIF sends on Messenger, with more than 400 million GIF sends

GIFs in Facebook Comments are Finally Here!

We know people love communicating with GIFs on Messenger, and we’re also making it easier to use GIFs on Facebook. Today we’re introducing the ability to add GIFs in comments for all people on Facebook globally.

Just tap the GIF button when you go to make a comment, type in what you’re looking to say, and add the GIF that really nails it!

The GIF Party

We’re also celebrating the 30th anniversary the best way we know how — a GIF party with some of your favorite stars.

GIPHY Studios created 20 GIFs featuring some of the internet’s most recognizable faces: DNCE, Logan Paul, Amanda Cerny, DREEZY, Patrick Starr, Violet Benson, Wuz Good, Brandi Marie, and Landon Moss.

Each GIF is a unique and shareable morsel of human expression. They will be available to use by searching #GIFparty when sharing a GIF on Facebook or Messenger or by visiting GIPHY.com/Facebook.

[Example GIFs featuring Logan Paul, Violet Benson, Amanda Cerny and Landon Moss]

Ending an Age-old Debate: How Do You Pronounce GIF?

Finally, we’re looking to solve the debate over how the word GIF is pronounced once and for all. Over the next few days, if you live in the US you might see a poll on Facebook asking you to cast your vote. You can also vote by visiting Facebook’s official Page on your mobile phone. To find the Page, search for “Facebook” in the main Facebook app.

We’ll report back here on whether the “hard g” or “soft g” pronunciation reigns supreme.


Announcing Updates to Safety Check

Facebook - Wed, 06/14/2017 - 15:00

By Naomi Gleit, VP Social Good

As part of our ongoing commitment to build a safe community, today we’re announcing several updates to Safety Check:

  • Introducing Fundraisers in Safety Check: people in the US will have the option to start a fundraiser from within Safety Check
  • Expanding Community Help: Community Help will be available on desktop and for all crisis types where Safety Check is activated
  • Adding more context with a personal note: now people can share a personal note in their Safety Check News Feed story with friends and loved ones
  • Introducing crisis descriptions: get more information about a crisis from NC4, our trusted third party global crisis reporting agency, within the Safety Check tool

Introducing Fundraisers in Safety Check
Following a crisis, one way people give and request help is through fundraising. To make this easier, we are introducing Fundraisers in Safety Check. Within Safety Check, people will be able to create or donate to a fundraiser for charitable and personal causes to help those in need. Fundraising provides a way for people who are also outside of the crisis area to offer help. Fundraisers in Safety Check will start to roll out in the coming weeks in the US.

Expanding Community Help
Since we launched Community Help earlier this year on iOS and Android, we have been inspired by the offers and requests for help generated by the community and want to make sure that those in need are able to access Community Help through any platform. Community Help will be available in the upcoming weeks on desktop, giving people another way to access the tool. Additionally, Community Help is now available for all crises where Safety Check is activated.

Adding more context with a personal note
After marking themselves safe, people share additional information to help reassure friends they are safe and to provide more context about the crisis. To make this easier, people can now add a personal note to tell their friends more about what’s happening from within the Safety Check tool. This note will appear in the News Feed story that is automatically generated when people mark themselves safe.

Introducing crisis descriptions
When people receive Safety Check notifications, they may have limited information about the crisis. To help provide additional context on crises and make sure people have the information that they need, we have started adding descriptions about the crisis from NC4, our trusted third party global crisis reporting agency.

Safety Check has been activated more than 600 times in two years and has notified people that their families and friends are safe more than a billion times. Keeping the community safe means everything to us at Facebook and we hope that these updates to Safety Check continue to do just that.


Using Data to Help Communities Recover and Rebuild

Facebook - Wed, 06/07/2017 - 18:00

By Molly Jackman, Public Policy Research Manager

After a flood, fire, earthquake or other natural disaster, response organizations need accurate information, and every minute counts in saving lives. Traditional communication channels are often offline and it can take significant time and resources to understand where help is desperately needed.

Facebook can help response organizations paint a more complete picture of where affected people are located so they can determine where resources — like food, water and medical supplies — are needed and where people are out of harm’s way.

Today, we are introducing disaster maps that use aggregated, de-identified Facebook data to help organizations address the critical gap in information they often face when responding to natural disasters. Many of these organizations worked with us to identify what data would be most helpful and how it could be put to action in the moments following a disaster.

This initiative is the product of close work with UNICEF, the International Federation of the Red Cross and Red Crescent Societies, the World Food Programme, and other organizations. It is an example of how technology can help keep people safe, one of our five areas of focus as we help build a global community.

Based on these organizations’ feedback we are providing multiple types of maps during disaster response efforts, which will include aggregated location information people have chosen to share with Facebook.

Location density maps show where people are located before, during and after a disaster. We can compare this information to historical records, like population estimates based on satellite images. Comparing these data sets can help response organizations understand areas impacted by a natural disaster.

Movement maps illustrate patterns of movement between different neighborhoods or cities over a period of several hours. By understanding these patterns, response organizations can better predict where resources will be needed, gain insight into patterns of evacuation, or predict where traffic will be most congested.

Safety Check maps are based on where our community uses Safety Check to notify their friends and family that they are safe during a disaster. We are using this de-identified data in aggregate to show where more or fewer people check in safe, which may help organizations understand where people are most vulnerable and where help is needed.

This type of information can help response organizations understand which neighborhoods suffered the most damage following an earthquake and where people might be in need of help as they evacuate their homes and eventually return.
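
As a rough illustration of how a location density map like those described above could be built from aggregated, de-identified data, here is a minimal sketch. The grid size, the minimum-count threshold used to suppress sparse cells, and the input format are assumptions for illustration, not Facebook’s actual methodology.

    # Minimal sketch: aggregate de-identified location points into a coarse grid
    # and compare each cell against a pre-disaster baseline. Cells with fewer
    # people than a minimum threshold are dropped so no individual stands out.
    from collections import Counter

    GRID_DEG = 0.1          # cell size in degrees (assumed)
    MIN_CELL_COUNT = 10     # suppress sparsely populated cells (assumed)

    def to_cell(lat, lon):
        return (round(lat / GRID_DEG), round(lon / GRID_DEG))

    def density(points):
        counts = Counter(to_cell(lat, lon) for lat, lon in points)
        return {cell: n for cell, n in counts.items() if n >= MIN_CELL_COUNT}

    def change_vs_baseline(current, baseline):
        # Relative change in people per cell after the disaster versus before;
        # strongly negative cells suggest evacuation, positive cells suggest
        # people gathering, for example at shelters.
        return {cell: (current.get(cell, 0) - before) / before
                for cell, before in baseline.items()}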

We are sharing this information with trusted organizations that have capacity to act on the data and respect our privacy standards, starting with UNICEF, the International Federation of the Red Cross and Red Crescent Societies, and the World Food Programme. We are working with these organizations to establish formal processes for responsibly sharing the datasets with others.

Over time, we intend to make it possible for additional organizations and governments to participate in this program. All applications will be reviewed carefully by people at Facebook, including those with local expertise.

We believe that our platform is a valuable source of information that can help response organizations serve people more efficiently and effectively. Ultimately, we hope this data helps communities have the information they need to recover and rebuild if disaster strikes.


Making Facebook Live More Accessible With Closed Captions

Facebook - Tue, 06/06/2017 - 18:45

By Supratik Lahiri, Product Manager, and Jeffrey Wieland, Director of Accessibility

Making Facebook accessible to everyone is a key part of building global community. Today we’re allowing publishers to include closed captions in Facebook Live, helping people who are deaf or hard of hearing to experience live videos. Now, if your captioning settings are turned on, you’ll automatically see closed captions on Live broadcasts when they’re available.

Over the past year, daily watch time for Facebook Live broadcasts has grown by more than 4x, and 1 in 5 Facebook videos is a Live broadcast. By enabling publishers to include closed captions with their Live broadcasts, we hope more people can now participate in the exciting moments that unfold on Live.

Today’s milestone represents the next step in our efforts to make content on Facebook accessible to more people. It’s already possible to add captions to non-live videos when uploading them to Facebook Pages, and publishers can use our speech recognition service to automatically generate captions for videos on their Pages.

For more information on adding closed captions to Facebook Live broadcasts, click here. For more information on Facebook’s accessibility features and settings, click here, and follow news and updates from the Facebook Accessibility team here.


Facebook Celebrates Pride Month

Facebook - Mon, 06/05/2017 - 15:00

By Alex Schultz, VP & Executive Sponsor of pride@facebook

As Pride celebrations begin around the world, Facebook is proud to support our diverse community, including those who have identified themselves on Facebook as gay, lesbian, bisexual, transgender or gender non-conforming. In fact, this year, over 12 million people across the globe are part of one of the 76,000 Facebook Groups in support of the LGBTQ community, and more than 1.5 million people plan to participate in one of the more than 7,500 Pride events on Facebook.

This year, we’re excited to unveil more ways than ever before for people to show their pride and support for the LGBTQ community on Facebook:

Update Your Profile Pic with a Rainbow Frame
Throughout the month of June, you might see a message from Facebook in your News Feed wishing you a Happy Pride and inviting you to add a colorful, Pride-themed profile frame. You might also see a special animation at the top of your News Feed if you react to our message.

React with Pride
You may see a colorful, limited-edition Pride Reaction during Pride Month. When you choose this temporary rainbow reaction, you’ll be expressing your Pride in response to the post.


Brighten Up Your Photos
In Facebook Camera, you can find some new colorful, Pride-themed masks and frames. Swipe left from News Feed, tap the magic wand to bring up camera effects, and you’ll find them in the mask and frame category.


Support an LGBTQ Cause
In the US, start a Facebook Fundraiser or donate to your favorite LGBTQ cause. On Facebook, you can raise money for a nonprofit or for people — yourself, a friend, or someone or something not on Facebook.

Facebook isn’t the only place to celebrate the cause. All across our entire family of apps, you will have the opportunity to show your support:

Join the #KindComments Movement on Instagram
The photo-sharing app is committed to fostering a safer and kinder community; this June it will turn walls in major US cities into colorful beacons of LGBTQ support, and you can leave supportive comments on posts. You can also celebrate Pride and get creative with stickers and a rainbow brush.


Frame Up with Pride on Messenger
During Pride month, you can add some love to your conversations with friends and family with Pride-themed stickers, frames, and effects in the Messenger Camera.

Our Commitment and Participation
Facebook has long been a supporter of LGBTQ rights, through our products, policies and benefits to our employees. Not only will we be a part of Pride activities in more than 20 cities around the world, including in San Francisco, where we first marched in 2011, but we will also celebrate with our employees by hosting events and discussions, as well as by draping the Facebook monument outside the Menlo Park headquarters in the rainbow flag, as the company has done each year since 2012.

Our commitment and support of the LGBTQ community has been unwavering. From our support of marriage equality and bullying prevention, to the many product experiences that we’ve brought to life, we are proud of our attention to the LGBTQ experience on Facebook, often thanks to the many LGBTQ people and allies who work here.

Last year, for the first time ever, we began publicly sharing self-reported data around our LGBTQ community at Facebook. In a recent, voluntary survey of our employees in the US about sexual orientation and gender identity, to which 67% responded, 7% self-identified as being lesbian, gay, bisexual, queer, transgender or asexual. We are proud to support the LGBTQ community, and while more work still remains, we are eager to be active partners going forward.

Happy Pride!

Categorías: Redes Sociales

Update on Trending

Facebook - Mié, 05/24/2017 - 19:00

By Ali Ahmadi, Product Manager, and John Angelo, Product Designer

Redesigned Trending Results Page

Starting today, we’re introducing a redesigned Trending results page, which is the page you see when you click on a Trending topic to learn more about it.

You’ve always been able to click on a topic to see related posts and stories, but we’ve redesigned the page to make it easier to discover other publications that are covering the story, as well as what your friends and public figures are saying about it.

You’ll be able to see the new results page on iPhone in the US, and we plan to make it available on Android and desktop soon.

Now, when you click on a Trending topic, you’ll see a carousel with stories from other publications about a given topic that you can swipe through. By making it easier to see what other news outlets are saying about each topic, we hope that people will feel more informed about the news in their region.

The stories that appear in this section are some of the most popular stories about that topic on Facebook. These stories are determined the same way as the featured headline — using a combination of factors including the engagement around the article on Facebook, the engagement around the publisher overall, and whether other articles are linking to it.
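
As a rough illustration only (the post does not disclose the actual ranking formula, and the weights, parameter names and normalization below are invented for this sketch), one way to combine those factors into a single score might look like this:

    def story_score(article_engagement, publisher_engagement, inbound_links,
                    w_article=0.5, w_publisher=0.3, w_links=0.2):
        """Blend the three factors named above into one score.

        The weights, and the assumption that each signal is already
        normalized to [0, 1], are invented for this sketch.
        """
        return (w_article * article_engagement
                + w_publisher * publisher_engagement
                + w_links * inbound_links)

    print(story_score(article_engagement=0.8, publisher_engagement=0.6, inbound_links=0.4))  # about 0.66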

There is no predetermined list of publications eligible to appear in Trending, and this update does not affect how Trending topics are identified, an approach we announced earlier this year.

Making Trending Easier to Discover On Mobile

One of the things we regularly hear from people who use Trending is that it can be difficult to find in the Facebook mobile app. We’re soon beginning a test in News Feed that will show people the top three Trending stories, which they can click on to see the full list of Trending topics and explore what people are discussing on Facebook.

While most people will not see Trending in their News Feed as part of this small test, we hope it will help us learn how to make Trending as useful and informative as possible. If you do see the Trending unit in your News Feed, you can remove it via the drop-down menu, which will prevent it from being shown to you in the future.

As before, we continue to listen to feedback about Trending and will keep making improvements in order to provide a valuable experience.

Categorías: Redes Sociales

Expanding Facebook Fundraisers to More People and Causes

Facebook - Mié, 05/24/2017 - 15:05

By Naomi Gleit, VP Social Good

Facebook is a place where people come together to connect with their communities and support one another in meaningful ways. Today, we are giving people another way to mobilize around causes they care about by expanding personal fundraisers to everyone over 18 in the US and by adding two new categories – community and sports.

In March we began testing personal fundraisers, a new product that allows people to raise money for a friend, themselves or a sick pet directly on Facebook. Since then, we've been inspired by how many people have created fundraisers and by the support felt by those who benefit from them.

People can create a fundraiser in just a few taps, reach their friends without leaving Facebook, and share the fundraiser to help build momentum. People can learn about the person who created the fundraiser and the person benefiting from it, as well as see which friends have donated. Now people can raise money for any of the following categories:

    • Education: such as tuition, books or classroom supplies
    • Medical: such as medical procedures, treatments or injuries
    • Pet Medical: such as veterinary procedures, treatments or injuries
    • Crisis Relief: such as public crises or natural disasters
    • Personal Emergency: such as a house fire, theft or car accident
    • Funeral and Loss: such as burial expenses or living costs after losing a loved one
    • Sports: such as equipment, competitions or team fees
    • Community: such as neighborhood services, community improvements or environmental improvements

Nonprofit fundraisers continue to be available for people on Facebook to raise funds and awareness for 501(c)(3) nonprofits.

It’s easy to get started:

  1. On mobile, tap the menu icon and select Fundraisers, or on desktop, go to facebook.com/fundraisers
  2. Choose to raise money for a Friend, Yourself or Someone or Something Not on Facebook
  3. Give your fundraiser a title and compelling story, and start raising money

All fundraisers are reviewed within 24 hours. Personal fundraisers are available on all devices and carry a 6.9% + $0.30 fee that goes to payment processing, fundraiser vetting, and security and fraud protection. Facebook's goal is to create a platform for good that's sustainable over the long term, not to make a profit from our charitable giving tools.
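
For illustration, assuming the fee is applied per donation (the post does not spell out the exact billing mechanics), the math works out roughly like this:

    def fundraiser_net(donation_dollars, pct_fee=0.069, flat_fee=0.30):
        """Estimate fees and the net amount for a single donation.

        Assumes the 6.9% + $0.30 fee is applied per donation; the actual
        billing mechanics may differ.
        """
        fee = round(donation_dollars * pct_fee + flat_fee, 2)
        return fee, round(donation_dollars - fee, 2)

    # A $50 donation: roughly $3.75 in fees, leaving about $46.25.
    print(fundraiser_net(50.00))  # (3.75, 46.25)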

We're constantly inspired by the good that people do on Facebook, and we're excited to learn more about how people use this new product so we can keep improving the experience.

Find out more about Facebook fundraisers at facebook.com/fundraisers.

Categorías: Redes Sociales

More Ways To Connect with Friends in Facebook Live

Facebook - Mar, 05/23/2017 - 18:00

By Erin Connolly, Product Manager, and Fred Beteille, Product Manager

We know Facebook Live is better with friends. We’ve been working on ways to make Live more fun, social and interactive, like with the new Live interactive effects we announced last month. Today we’re excited to announce two new features that make it easier to share experiences and connect in real time with your friends on Live.

Live Chat With Friends

One of the best things about Live is that you can discuss what's happening in the broadcast in real time. In fact, people comment more than 10 times as often on Facebook Live videos as on regular videos. When it comes to compelling public broadcasts, such as a breaking news event, a Q&A with your favorite actor or behind-the-scenes action after a big game, watching with the community and reading comments is an exciting part of the experience. We know sometimes people also want the option to interact with only their friends during a public live broadcast, so we're rolling out Live Chat With Friends.

Live Chat With Friends lets you invite friends to a private chat about a public live broadcast. You can invite friends who are already watching or other friends who you think may want to tune in. You’re able to jump back into the public conversation at any time, and you can still continue chatting with your friends via Messenger after the broadcast ends.

With Live Chat With Friends, you can be part of big moments with the wider community but also have the option to participate in personal conversations with the people closest to you, directly within the Live experience. We’re testing this feature on mobile in several countries, and we look forward to making it available more broadly later this summer.

Live With
Last year we started rolling out the ability for public figures to go live with a guest. Now available for all profiles and Pages on iOS, Live With lets you invite a friend into your live video so you can hang out together, even if you’re not in the same place. Sharing the screen with a friend can make going live more fun and interactive — for both you and your viewers.

To invite a friend to join you in your live video, simply select a guest from the Live Viewers section, or tap a comment from the viewer you want to invite. Your viewer can then choose whether or not to join your broadcast. You can go live with a guest in both portrait mode (for a picture-in-picture experience) and landscape mode (for a side-by-side experience). For a full tutorial, click here.

We’re excited to see how people use these Facebook Live features to come together around moments big and small.

Categorías: Redes Sociales

Facebook’s Community Standards: How and Where We Draw the Line

Facebook - Mar, 05/23/2017 - 15:00

By Monika Bickert, Head of Global Policy Management

Last month, people shared several horrific videos on Facebook of Syrian children in the aftermath of a chemical weapons attack. The videos, which also appeared elsewhere on the internet, showed the children shaking, struggling to breathe and eventually dying.

The images were deeply shocking – so much so that we placed a warning screen in front of them. But the images also prompted international outrage and renewed attention on the plight of Syrians.

Reviewing online material on a global scale is challenging and essential. As the person in charge of doing this work for Facebook, I want to explain how and where we draw the line.

On an average day, more than a billion people use Facebook. They share posts in dozens of languages: everything from photos to live videos. A very small percentage of those will be reported to us for investigation. The range of issues is broad – from bullying and hate speech to terrorism – and complex. Designing policies that both keep people safe and enable them to share freely means understanding emerging social issues and the way they manifest themselves online, and being able to respond quickly to millions of reports a week from people all over the world.

For our reviewers, there is another hurdle: understanding context. It’s hard to judge the intent behind one post, or the risk implied in another. Someone posts a graphic video of a terrorist attack. Will it inspire people to emulate the violence, or speak out against it? Someone posts a joke about suicide. Are they just being themselves, or is it a cry for help?

In the UK, being critical of the monarchy might be acceptable. In some parts of the world it will get you a jail sentence. Laws can provide guidance, but often what’s acceptable is more about norms and expectations. New ways to tell stories and share images can bring these tensions to the surface faster than ever.

We aim to keep our site safe. We don’t always share the details of our policies, because we don’t want to encourage people to find workarounds – but we do publish our Community Standards, which set out what is and isn’t allowed on Facebook, and why.

Our standards change over time. We are in constant dialogue with experts and local organizations, on everything from child safety to terrorism to human rights.  Sometimes this means our policies can seem counterintuitive. As the Guardian reported, experts in self-harm advised us that it can be better to leave live videos of self-harm running so that people can be alerted to help, but to take them down afterwards to prevent copycats. When a girl in Georgia, USA, attempted suicide on Facebook Live two weeks ago, her friends were able to notify police, who managed to reach her in time.

We try hard to stay objective. The cases we review aren’t the easy ones: they are often in a grey area where people disagree. Art and pornography aren’t always easily distinguished, but we’ve found that digitally generated images of nudity are more likely to be pornographic than handmade ones, so our policy reflects that.

There’s a big difference between general expressions of anger and specific calls for a named individual to be harmed, so we allow the former but don’t permit the latter.

These tensions – between raising awareness of violence and promoting it, between freedom of expression and freedom from fear, between bearing witness to something and gawking at it – are complicated, and there are rarely universal legal standards to provide clarity. Being as objective as possible is the only way we can be consistent across the world. But we still sometimes end up making the wrong call.

The hypothetical situations we use to train reviewers are intentionally extreme. They’re designed to help the people who do this work deal with the most difficult cases. When we first created our content standards nearly a decade ago, much was left to the discretion of individual employees. But because no two people will have identical views of what defines hate speech or bullying – or any number of other issues – we now include clear definitions.

We face criticism from people who want more censorship and people who want less. We see that as a useful signal that we are not leaning too far in any one direction.

I hope that readers will understand that we take our role extremely seriously. For many of us on the team within Facebook, safety is a passion that predates our work at the company: I spent more than a decade as a criminal prosecutor, investigating everything from child sexual exploitation to terrorism. Our team also includes a counter extremism expert from the UK, the former research director of West Point’s Combating Terrorism Center, a rape crisis center worker, and a teacher.

All of us know there is more we can do. Last month, we announced that we are hiring an extra 3,000 reviewers. This is demanding work, and we will continue to do more to ensure we are giving them the right support, both by making it easier to escalate hard decisions quickly and by providing the psychological support they need.

Technology has given more people more power to communicate more widely than ever before. We believe the benefits of sharing far outweigh the risks. But we also recognize that society is still figuring out what is acceptable and what is harmful, and that we, at Facebook, can play an important part in that conversation.

Categorías: Redes Sociales

News Feed FYI: New Updates to Reduce Clickbait Headlines

Facebook - Mié, 05/17/2017 - 19:00

By Arun Babu, Engineer, Annie Liu, Engineer, and Jordan Zhang, Engineer

People tell us they don’t like stories that are misleading, sensational or spammy. That includes clickbait headlines that are designed to get attention and lure visitors into clicking on a link. In an effort to support an informed community, we’re always working to determine what stories might have clickbait headlines so we can show them less often.

Last year we made an update to News Feed to reduce stories from sources that consistently post clickbait headlines that withhold and exaggerate information. Today, we are making three updates that build on this work so that people will see even fewer clickbait stories in their feeds, and more of the stories they find authentic.

  • First, we are now taking into account clickbait at the individual post level in addition to the domain and Page level, in order to more precisely reduce clickbait headlines.
  • Second, in order to make this more effective, we are dividing our efforts into two separate signals — so we will now look at whether a headline withholds information or if it exaggerates information separately.
  • Third, we are starting to test this work in additional languages.

How We Are Improving Our Efforts

One of our News Feed values is authentic communication, so we’ve been working to understand what people find authentic and what people do not.

We’ve learned from last year’s update that we can better detect different kinds of clickbait headlines by separately — rather than jointly — identifying signals that withhold or exaggerate information.

Headlines that withhold information intentionally leave out crucial details or mislead people, forcing them to click to find out the answer. For example, “When She Looked Under Her Couch Cushions And Saw THIS…” Headlines that exaggerate the details of a story with sensational language tend to make the story seem like a bigger deal than it really is. For example, “WOW! Ginger tea is the secret to everlasting youth. You’ve GOT to see this!”

We addressed this similarly to how we previously worked to reduce clickbait: we categorized hundreds of thousands of headlines as clickbait or not clickbait, considering whether the headline exaggerates the details of a story and, separately, whether it withholds information. A team at Facebook reviewed thousands of headlines against these criteria, validating each other's work to identify large sets of clickbait headlines.

From there, we identified which phrases are commonly used in clickbait headlines but not in other headlines, similar to how many email spam filters work.
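
As a loose sketch of that kind of phrase-based approach (this is not Facebook's actual model; the n-gram length, counts and threshold below are invented for the example, and a real system would use far richer signals), the idea might look like this in Python:

    from collections import Counter

    def ngrams(headline, n=3):
        """Return the set of n-word phrases in a headline."""
        words = headline.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def clickbait_phrases(clickbait_headlines, normal_headlines, min_count=50):
        """Collect phrases that appear often in labeled clickbait headlines
        but never in the non-clickbait set."""
        counts = Counter(p for h in clickbait_headlines for p in ngrams(h))
        normal = {p for h in normal_headlines for p in ngrams(h)}
        return {p for p, c in counts.items() if c >= min_count and p not in normal}

    def looks_like_clickbait(headline, phrases, threshold=1):
        """Flag a new headline by how many known clickbait phrases it contains."""
        return len(ngrams(headline) & phrases) >= threshold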

Posts with clickbait headlines will appear lower in News Feed. We will continue to learn over time, and we hope to continue expanding this work to reduce clickbait in even more languages.

Will This Impact My Page?

We anticipate that most Pages won’t see any significant changes to their distribution in News Feed as a result of this update.

Publishers that rely on clickbait headlines should expect their distribution to decrease. Pages should avoid headlines that withhold the information needed to understand the content of an article, as well as headlines that exaggerate the article to create misleading expectations. If a Page stops posting clickbait and sensational headlines, its posts will no longer be affected by this change.

As always, Pages should refer to our publishing best practices. We will learn from these changes and will continue to work on reducing clickbait so News Feed is a place for authentic communication.

Categorías: Redes Sociales

Connecting People With Mental Health Resources and Building a Safer Community

Facebook - Mar, 05/16/2017 - 22:00

By Antigone Davis, Global Head of Safety

May is Mental Health Awareness Month in the US, and this month Facebook is highlighting the tools and resources we have developed for people who may be struggling. People may see videos or photos in News Feed as part of a broad awareness campaign about supportive groups, crisis support over Messenger and suicide prevention tools.

We’ve been committed to mental health support for many years, and this is one of the ways we’re working to build a safer and more supportive community on Facebook. As we continue to invest in new tools and resources, we hope Facebook can help provide support to more people over time. For example, Mama Dragons, a Utah community of mothers with LGBTQ children, uses Facebook Groups to share experiences and offer support.

Finding Supportive Groups

On Facebook, people can connect to groups that support them through difficult times. Throughout May, we’ll be helping more people find groups about mental health and well-being.

Crisis Support Over Messenger

People can talk in real time with trained crisis and mental health support volunteers over Messenger. Participating organizations include Crisis Text Line, the National Eating Disorder Association, Partnership for Drug-Free Kids and the National Suicide Prevention Lifeline. We are also happy to announce that we will be adding The Trevor Project, an organization focused on crisis intervention and suicide prevention for LGBTQ youth. The option will roll out over the next few months.

Suicide Prevention Tools and Resources

We’ve offered suicide prevention tools on Facebook for more than 10 years. We developed these in collaboration with mental health organizations such as Save.org, National Suicide Prevention Lifeline, Forefront and Crisis Text Line, and with people who have personal experience thinking about or attempting suicide. Last year we expanded the availability of these tools worldwide with the help of over 70 partners, and we’ve improved them based on new technology and feedback from the community.

This month Instagram is also helping to raise awareness about mental health and the communities of support that exist on the platform. To learn more about the tools and resources available on Instagram and the #HereForYou initiative, visit instagram-together.com.

Together, we hope these resources help more people who may be struggling, and we're continuously improving them to build a safer and more supportive community on Facebook.

Categorías: Redes Sociales

Video Carousel Ads on Smartphone Mobile Web

Facebook - Mar, 05/16/2017 - 19:00

During our regular reviews to ensure the accuracy of our systems, we recently found and fixed a bug that misattributed some clicks on video carousel ads as link clicks. This bug occurred when people were on mobile web browsers on smartphones — not on desktop or in the Facebook mobile app.

The bug affected billing only when all of the following conditions held: the ad was a video carousel unit; the advertiser chose to bid on link clicks; and the viewer was on a smartphone web browser. In these cases, instead of being billed only for link clicks (clicks to an advertiser's selected destination), these advertisers were incorrectly billed when people clicked on the videos in the carousel to enlarge and watch them. Advertisers will receive a full credit for the charges they incurred for these misattributed clicks.

Most people use Facebook through the app on their phones, and mobile web browser ad impressions make up a small percentage of the overall ad impressions people see on Facebook. Given that this bug was limited to smartphone mobile web, and specifically to video carousel ads bidding on link clicks, the impact from a billing perspective was 0.04% of ad impressions. Regardless of how many impressions were affected, we take all bugs seriously and apologize for any inconvenience this has caused.
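
To make the scope of the fix concrete, a hypothetical sketch of the kind of check involved might look like the following (the event schema and field names are invented; the post does not describe Facebook's actual ads pipeline):

    def misattributed_charges(billed_click_events):
        """Return billed clicks that were really video-expand taps on
        smartphone mobile web, which should be credited back.

        Each event is a dict with hypothetical field names.
        """
        return [
            e for e in billed_click_events
            if e["ad_unit"] == "video_carousel"
            and e["bid_type"] == "link_click"
            and e["platform"] == "smartphone_mobile_web"
            # a tap that only enlarged a video, not a click to the advertiser's destination
            and e["click_target"] != "advertiser_destination"
        ]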

Categorías: Redes Sociales
