
Two Billion People Coming Together on Facebook

Facebook - Tue, 06/27/2017 - 19:08

By Mike Nowak, Product Director, and Guillermo Spiller, Product Manager

As Mark Zuckerberg announced today, we reached a new milestone: there are now 2 billion people connecting and building communities on Facebook every month.

This wouldn’t have happened without the millions of smaller communities and individuals who are sharing and making meaningful contributions every day. Each day, more than 175 million people share a Love reaction, and on average, over 800 million people like something on Facebook. More than 1 billion people use Groups every month.

To show our appreciation for the many ways people support one another on Facebook, we will share several personalized experiences over the coming days.

Good Adds Up Video

We are launching a personalized video to celebrate bringing the world closer together. You may see your video in your News Feed or by visiting facebook.com/goodaddsup.

Celebrating the Good People Do

After someone reacts to a friend’s post with Love, wishes someone happy birthday or creates a group, they will see a message in News Feed thanking them.

Sharing Community Stories and Impact

On facebook.com/goodaddsup, we are featuring fun facts about how people are contributing to the community. In the US, we are also sharing stories of people who inspire us. Every day, people connect with one another, contribute to their local communities and help make the world a better place.

We want to help do our part as well. As Mark mentioned last week at the Facebook Communities Summit, our mission is to bring the world closer together. Reaching this milestone is just one small step toward that goal. We are excited to continue to build products that allow people to connect with one another, regardless of where they live or what language they speak.

Thank you for being part of our global community!

Categories: Social Networks

Hard Questions: Hate Speech

Facebook - Tue, 06/27/2017 - 14:00

Who should decide what is hate speech in an online global community?
By Richard Allan, VP EMEA Public Policy

As more and more communication takes place in digital form, the full range of public conversations is moving online — in groups and broadcasts, in text and video, even with emoji. These discussions reflect the diversity of human experience: some are enlightening and informative, others are humorous and entertaining, and others still are political or religious. Some can also be hateful and ugly. Most responsible communications platforms and systems are now working hard to restrict this kind of hateful content.

Facebook is no exception. We are an open platform for all ideas, a place where we want to encourage self-expression, connection and sharing. At the same time, when people come to Facebook, we always want them to feel welcome and safe. That’s why we have rules against bullying, harassing and threatening someone.

But what happens when someone expresses a hateful idea online without naming a specific person? A post that calls all people of a certain race “violent animals” or describes people of a certain sexual orientation as “disgusting” can feel very personal and, depending on someone’s experiences, could even feel dangerous. In many countries around the world, those kinds of attacks are known as hate speech. We are opposed to hate speech in all its forms, and don’t allow it on our platform.

In this post we want to explain how we define hate speech and approach removing it — as well as some of the complexities that arise when it comes to setting limits on speech at a global scale, in dozens of languages, across many cultures. Our approach, like those of other platforms, has evolved over time and continues to change as we learn from our community, from experts in the field, and as technology provides us new tools to operate more quickly, more accurately and precisely at scale.

Defining Hate Speech

The first challenge in stopping hate speech is defining its boundaries.

People come to Facebook to share their experiences and opinions, and topics like gender, nationality, ethnicity and other personal characteristics are often a part of that discussion. People might disagree about the wisdom of a country’s foreign policy or the morality of certain religious teachings, and we want them to be able to debate those issues on Facebook. But when does something cross the line into hate speech?

Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.

In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.

People who live in the same country — or next door — often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?

There is very important academic work in this area that we follow closely. Timothy Garton Ash, for example, has created the Free Speech Debate to look at these issues on a cross-cultural basis. Susan Benesch established the Dangerous Speech Project, which investigates the connection between speech and violence. These projects show how much work is left to be done in defining the boundaries of speech online, which is why we’ll keep participating in this work to help inform our policies at Facebook.

Enforcement

We’re committed to removing hate speech any time we become aware of it. Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.*)
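
As a rough consistency check on those figures (the arithmetic here is ours, not Facebook’s): an average month runs about 52 ÷ 12 ≈ 4.33 weeks, so

  66,000 posts/week × 4.33 weeks/month ≈ 286,000 posts/month,

which is in line with the rounded figure of roughly 288,000 a month.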

But it’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls — and too often we get it wrong.

Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people. If we identify credible threats of imminent violence against anyone, including threats based on a protected characteristic, we also escalate that to local law enforcement.

But sometimes, there isn’t a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today.

Here are some of the things we take into consideration when deciding what to leave on the site and what to remove.

Context

What does the statement “burn flags not fags” mean? While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to “reclaim” the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it’s a hate speech violation, more context is needed.

Often the most difficult edge cases involve language that seems designed to provoke strong feelings, making the discussion even more heated — and a dispassionate look at the context (like country of speaker or audience) more important. Regional and linguistic context is often critical, as is the need to take geopolitical events into account. In Myanmar, for example, the word “kalar” has benign historic roots, and is still used innocuously across many related Burmese words. The term can, however, also be used as an inflammatory slur, including as an attack by Buddhist nationalists against Muslims. We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. Recently, we’ve had trouble enforcing this policy correctly, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.

In Russia and Ukraine, we faced a similar issue around the use of slang words the two groups have long used to describe each other. Ukrainians call Russians “moskal,” literally “Muscovites,” and Russians call Ukrainians “khokhol,” literally “topknot.” After conflict started in the region in 2014, people in both countries started to report the words used by the other side as hate speech. We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.

Often a policy debate becomes a debate over hate speech, as two sides adopt inflammatory language. This is often the case with the immigration debate, whether it’s about the Rohingya in South East Asia, the refugee influx in Europe or immigration in the US. This presents a unique dilemma: on the one hand, we don’t want to stifle important policy conversations about how countries decide who can and can’t cross their borders. At the same time, we know that the discussion is often hurtful and insulting.

When the influx of migrants arriving in Germany increased in recent years, we received feedback that some posts on Facebook were directly threatening refugees or migrants. We investigated how this material appeared globally and decided to develop new guidelines to remove calls for violence against migrants or dehumanizing references to them — such as comparisons to animals, to filth or to trash. But we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate.

Intent

People’s posts on Facebook exist in the larger context of their social relationships with friends. When a post is flagged for violating our policies on hate speech, we don’t have that context, so we can only judge it based on the specific text or images shared. But the context can indicate a person’s intent, which can come into play when something is reported as hate speech.

There are times when someone might share something that would otherwise be considered hate speech, but for non-hateful reasons, such as making a self-deprecating joke or quoting lyrics from a song. People often use satire and comedy to make a point about hate speech.

Or they speak out against hatred by condemning someone else’s use of offensive language, which requires repeating the original offense. This is something we allow, even though it might seem questionable since it means some people may encounter material disturbing to them. But it also gives our community the chance to speak out against hateful ideas. We revised our Community Standards to encourage people to make it clear when they’re sharing something to condemn it, but sometimes their intent isn’t clear, and anti-hatred posts get removed in error.

On other occasions, people may reclaim offensive terms that were used to attack them. When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them. For example, the use of the word “dyke” may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay. However, if someone posted a photo of themselves with #dyke, it would be allowed. Another example is the word “faggot.” This word could be considered hate speech when directed at a person, but, in Italy, among other places, “frocio” (“faggot”) is used by LGBT activists to denounce homophobia and reclaim the word. In these cases, removing the content would mean restricting someone’s ability to express themselves on Facebook.

Mistakes

If we fail to remove content that you report because you think it is hate speech, it feels like we’re not living up to the values in our Community Standards. When we remove something you posted and believe is a reasonable political view, it can feel like censorship. We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.

Our mistakes have caused a great deal of concern in a number of communities, including among groups who feel we act — or fail to act — out of bias. We are deeply committed to addressing and confronting bias anywhere it may exist. At the same time, we work to fix our mistakes quickly when they happen.

Last year, Shaun King, a prominent African-American activist, posted hate mail he had received that included vulgar slurs. We took down Mr. King’s post in error — not recognizing at first that it was shared to condemn the attack. When we were alerted to the mistake, we restored the post and apologized. Still, we know that these kinds of mistakes are deeply upsetting for the people involved and cut against the grain of everything we are trying to achieve at Facebook.

Continuing To Improve

People often ask: can’t artificial intelligence solve this? Technology will continue to be an important part of how we try to improve. We are, for example, experimenting with ways to filter the most obviously toxic language in comments so that those comments are hidden from posts. But while we’re continuing to invest in these promising advances, we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.
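
To illustrate the general idea in the simplest possible terms, here is a toy JavaScript sketch. It is not Facebook’s system: the post describes machine learning rather than a fixed word list, and every name below is hypothetical.

  // Toy illustration: hide comments that contain obviously toxic terms.
  // A real system would use trained classifiers, not a hard-coded list.
  const toxicTerms = ["toxicterm1", "toxicterm2"]; // hypothetical placeholders

  function shouldHideComment(text) {
    const lower = text.toLowerCase();
    return toxicTerms.some((term) => lower.includes(term));
  }

  const comments = ["Have a nice day!", "You toxicterm1!"];
  console.log(comments.filter((c) => !shouldHideComment(c)));
  // -> [ 'Have a nice day!' ]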

That’s why we rely so heavily on our community to identify and report potential hate speech. With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech. Our model builds on the eyes and ears of everyone on the platform — the people who vigilantly report millions of posts to us each week for all sorts of potential violations. We then have our teams of reviewers, who have broad language expertise and work 24 hours a day across time zones, to apply our hate speech policies.

We’re building up these teams that deal with reported content: over the next year, we’ll add 3,000 people to our community operations team around the world, on top of the 4,500 we have today. We’ll keep learning more about local context and changing language. And, because measurement and reporting are an important part of our response to hate speech, we’re working on better ways to capture and share meaningful data with the public.

Managing a global community in this manner has never been done before, and we know we have a lot more work to do. We are committed to improving — not just when it comes to individual posts, but in how we discuss and explain our choices and policies more broadly.

Read more about our new blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better. Please send suggestions to hardquestions@fb.com.

*What’s in the numbers:

  • These numbers represent an average from April and May 2017.
  • These numbers reflect content that was reported for hate speech and subsequently deleted, whatever the reason.
  • The numbers are specific to reports on individual posts on Facebook.
    • These numbers do not include hate speech deleted from Instagram.
    • These numbers do not include hate speech that was deleted because an entire page, group or profile was taken down or disabled. This means we could be drastically undercounting because a hateful group may contain many individual items of hate speech.
    • These numbers do not include hate speech that was reported for other reasons.
      • For example, outrageous statements can be used to get people to click on spam links; under our current definitions, if such a post was reported for spam, we do not track it as hate speech.
      • For example, if a post was reported for nudity or bullying, but deleted for hate speech, it would not be counted in these numbers.
    • These numbers might include content that was reported for hate, but deleted for other reasons.
      • For example, if a post was reported for hate speech, but deleted for nudity or bullying, it would be counted in these numbers.
    • These numbers also include instances in which we may have taken down content mistakenly.
  • The numbers vary dramatically over time due to offline events (like the aftermath of a terror attack) or online events (like a spam attack).
  • We are exploring a better process by which to log our reports and removals, for more meaningful and accurate data.
Categories: Social Networks

How to turn a Raspberry Pi into an eBook server

Open Source - Tue, 06/27/2017 - 09:02

Recently, Calibre 3.0 was released, which enables users to read books in the browser! Note that Raspbian's repositories have not yet been updated (as of this writing).


Categories: Open Source

An introduction to functional programming in JavaScript

Open Source - Tue, 06/27/2017 - 09:01

When Brendan Eich created JavaScript in 1995, he intended to do Scheme in the browser. Scheme, being a dialect of Lisp, is a functional programming language. Things changed when Eich was told that the new language should be the scripting language companion to Java. Eich eventually settled on a language that has a C-style syntax (as does Java), yet has first-class functions. Java technically did not have first-class functions until version 8; however, you could simulate them using anonymous classes.
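
To make the idea of first-class functions concrete, here is a minimal JavaScript sketch (our example, not the article's): functions can be stored in variables, passed as arguments and returned from other functions.

  // A function stored in a variable, like any other value.
  const double = (x) => x * 2;

  // A function passed as an argument to another function.
  const doubled = [1, 2, 3].map(double); // [2, 4, 6]

  // A function returned from another function (a closure).
  function multiplier(factor) {
    return (x) => x * factor;
  }
  const triple = multiplier(3);
  console.log(doubled, triple(5)); // prints [ 2, 4, 6 ] 15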


Categories: Open Source

What our research taught us about scaling open culture

Open Source - Tue, 06/27/2017 - 09:00

In an open organization, culture is critical. In my previous column, I explained how and why we’re on a journey at Red Hat to scale our open culture into the future. Today, I want to share the findings of the first phase of our "Scaling Our Culture For The Future" project, where we decided to gather and analyze some data and, at the same time, engage every Red Hatter in a company-wide conversation about our culture.

Culture is everyone's job

As we look to scale our open culture into the future, we wanted to know how Red Hatters would answer questions like:


Categories: Open Source

Facebook, Microsoft, Twitter and YouTube Announce Formation of the Global Internet Forum to Counter Terrorism

Facebook - Mon, 06/26/2017 - 19:30

Today, Facebook, Microsoft, Twitter and YouTube are announcing the formation of the Global Internet Forum to Counter Terrorism, which will help us continue to make our hosted consumer services hostile to terrorists and violent extremists.

The spread of terrorism and violent extremism is a pressing global problem and a critical challenge for us all. We take these issues very seriously, and each of our companies has developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services. We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online.

The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the UK and other governments; and the conclusions of the recent G7 and European Council meetings. It will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.  

The scope of our work will evolve over time, as we will need to be responsive to ever-evolving terrorist and extremist tactics. Initially, however, our work will focus on:

  1. Technological solutions: Our companies will work together to refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchange best practices as we develop and implement new content detection and classification techniques using machine learning; and define standard transparency reporting methods for terrorist content removals.
  2. Research: We will commission research to inform our counter-speech efforts and guide future technical and policy decisions around the removal of terrorist content.
  3. Knowledge-sharing: We will work with counter-terrorism experts including governments, civil society groups, academics and other companies to engage in shared learning about terrorism. And through a joint partnership with the UN Security Council Counter-Terrorism Executive Directorate (UN CTED) and the ICT4Peace Initiative, we are establishing a broad knowledge-sharing network to:
    • Engage with smaller companies: We will help them develop the technology and processes necessary to tackle terrorist and extremist content online.
    • Develop best practices: We already partner with organizations such as the Center for Strategic and International Studies, Anti-Defamation League and Global Network Initiative to identify how best to counter extremism and online hate, while respecting freedom of expression and privacy. We can socialize these best practices, and develop additional shared learnings on topics such as community guideline development, and policy enforcement.
    • Counter-speech: Each of us already has robust counter-speech initiatives in place (e.g., YouTube’s Creators for Change, Jigsaw’s Redirect Method, Facebook’s P2P and OCCI, Microsoft’s partnership with the Institute for Strategic Dialogue for counter-narratives on Bing, Twitter’s global NGO training program). The forum we have established allows us to learn from and contribute to one another’s counter-speech efforts, discuss how to further empower and train civil society organizations and individuals who may be engaged in similar work, and support ongoing efforts such as the Civil society empowerment project (CSEP).

We will be hosting a series of learning workshops in partnership with UN CTED/ICT4Peace in Silicon Valley and around the world to drive these areas of collaboration.

Further information on all of the above initiatives will be shared in due course.

 

Categories: Social Networks

When not to use a JavaScript framework

Open Source - Mon, 06/26/2017 - 09:02

As the internet has evolved, web development has grown well beyond its intended abilities—for good and bad. To smooth over the rougher edges, web developers have invented a plethora of frameworks, both small and not so small. This has been good for developers, because browser fragmentation and standards problems abound, especially for those who want new features in APIs and more unified syntax for those features.


Categories: Open Source

How Linux and makerspaces can strengthen our social fabric

Open Source - Mon, 06/26/2017 - 09:02

In recent years, we've seen the rise of makerspaces, a new social invention where people with shared interests, especially in STEAM (science, technology, engineering, art, and math), gather to work on projects and share ideas.


Categories: Open Source

Enter giveaway for a brand new System76 laptop

Open Source - Mon, 06/26/2017 - 09:01

It's been a big year for giveaways here on Opensource.com. We've grown our program to include more new and different products than ever before, and this week we're excited to bring you another first: a laptop giveaway! One lucky reader will have a chance at a brand-new System76 Gazelle laptop with Linux pre-installed.


Categories: Open Source

18 open source translation tools to localize your project

Open Source - Mon, 06/26/2017 - 09:00

Localization plays a central role in the ability to customize an open source project to suit the needs of users around the world. Besides coding, language translation is one of the main ways people around the world contribute to and engage with open source projects.

There are tools specific to the language services industry (surprised to hear that's a thing?) that enable a smooth localization process with a high level of quality. Categories that localization tools fall into include:


Categories: Open Source

Firefox Focus for Android, Torvalds reflects on Linux, and more news

Open Source - Sat, 06/24/2017 - 09:00

In this edition of our open source news roundup, we take a look at open source seeds, the release of Firefox Focus for Android, and more.

Open source news roundup for June 11-24, 2017


Categories: Open Source

Top 5: Getting started with Python, Ansible to manage PostgreSQL, and more

Open Source - Fri, 06/23/2017 - 19:35

In this week's Top 5, we highlight machine learning, games, DevOps, and more!

Top 5 articles of the week

5. Using open source tools to play Dungeons and Dragons

Joe Kline shares how he uses open source tools to play role-playing games – both in person and online. Create scenarios, develop maps, and do more with tools you know and love.


Categories: Open Source

Are you a Python coder?

Open Source - Fri, 06/23/2017 - 09:02

It seems like every day I'm coming across a new project written in Python.


Categories: Open Source

3 mistakes to avoid when learning to code in Python

Open Source - Fri, 06/23/2017 - 09:01

It's never easy to admit when you do things wrong, but making errors is part of any learning process, from learning to walk to learning a new programming language, such as Python.

Here's a list of three things I got wrong when I was learning Python, presented so that newer Python programmers can avoid making the same mistakes. These are errors that I either got away with for a long time or that created big problems that took hours to solve.

Take heed, young coders: some of these mistakes are afternoon wasters!


Categories: Open Source

An introduction to creating documents in LaTeX

Open Source - Fri, 06/23/2017 - 09:00

LaTeX (pronounced lay-tech) is a method of creating documents using plain text, stylized using markup tags, similar to HTML/CSS or Markdown. LaTeX is most commonly used to create documents for academia, such as academic journals. In LaTeX, the author doesn't stylize the document directly, as they would in a word processor such as Microsoft Word, LibreOffice Writer, or Apple Pages; instead, they write code in plain text that must be compiled to produce a PDF document.
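
To make that concrete, here is a minimal, hypothetical LaTeX source file (the title and author are placeholders, not taken from the article); compiling it with a tool such as pdflatex produces the finished PDF:

  \documentclass{article}
  \title{A Minimal Example}   % placeholder title
  \author{Jane Doe}           % placeholder author
  \begin{document}
  \maketitle
  \section{Introduction}
  The text is written in plain text, and markup commands such as
  \emph{this one} control the styling; the compiler handles the layout.
  \end{document}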


Categories: Open Source

Our First Communities Summit and New Tools For Group Admins

Facebook - Thu, 06/22/2017 - 18:12

By Kang-Xing Jin, VP, Engineering

Today we hosted our first-ever Facebook Communities Summit in Chicago with hundreds of group admins, where we announced new features to support their communities on Facebook.

Mark Zuckerberg kicked off by celebrating the role Groups play in the Facebook community and thanking the group admins who lead them. He also announced a new mission for Facebook that will guide our work over the next decade: Give people the power to build community and bring the world closer together.

An important part of delivering on our new mission is supporting group admins, who are real community leaders on Facebook. We’re adding several new features to help them grow and manage their groups:

  • Group Insights: group admins have told us consistently that having a better understanding of what’s going on in their groups would help them make decisions on how to best support their members. Now, with Group Insights, they’ll be able to see real-time metrics around growth, engagement and membership — such as the number of posts and times that members are most engaged.
  • Membership request filtering: we also hear from admins that admitting new members is one of the most time-consuming things they do. So, we added a way for them to sort and filter membership requests on common categories like gender and location, and then accept or decline all at once.
  • Removed member clean-up: to help keep their communities safe from bad actors, group admins can now remove a person and the content they’ve created within the group, including posts, comments and other people added to the group, in one step.
  • Scheduled posts: group admins and moderators can create and conveniently schedule posts on a specific day and time.
  • Group to group linking: we’re beginning to test group-to-group linking, which allows group admins to recommend similar or related groups to their members. This is just the beginning of ways that we’re helping bring communities and sub-communities closer together.

More than 1 billion people around the world use Groups, and more than 100 million people are members of “meaningful groups.” These are groups that quickly become the most important part of someone’s experience on Facebook. Today we’re setting a goal to help 1 billion people join meaningful communities like these.

In Chicago, we celebrated some of these groups built around local neighborhoods, shared passions and life experiences. For example, some of the groups and admins that attended include:

  • Terri Hendricks, who started Lady Bikers of California so that women who ride motorcycles could connect with each other, meet in real life through group rides, and offer each other both motorcycle-related and personal support. Terri says that when she started riding motorcycles it was rare to see other women who rode and that across the group, there is “nothing that these ladies wouldn’t do for each other.”
  • Matthew Mendoza, who started Affected by Addiction Support Group. The group is a safe space for people who are experiencing or recovering from drug and alcohol addiction, as well as their friends and family, to offer support and share stories.
  • Kenneth Goodwin, minister of Bethel Church in Decatur, Georgia, who uses the Bethel Original Free Will Baptist Church group to post announcements to the local community about everything happening at Bethel. He and the other admins will often share information about events, meeting times for their small group ministries, and live videos of sermons so people who cannot attend can watch from their homes.

We’re inspired by these stories and the hundreds of others we’ve heard from people attending today’s event. We’re planning more events to bring together group admins outside the US and look forward to sharing more details soon.

Categories: Social Networks

A user's guide to links in the Linux filesystem

Open Source - Thu, 06/22/2017 - 09:02

In articles I have written about various aspects of Linux filesystems for Opensource.com, including An introduction to Linux's EXT4 filesystem; Managing devices in Linux; An introduction to Linux filesystems; and A Linux user's guide to Logical Vol


Categories: Open Source

8 ways to contribute to open source when you have no time

Open Source - Thu, 06/22/2017 - 09:01

One of the most common reasons people give for not contributing (or not contributing more) to open source is a lack of time. I get it; life is challenging, and there are so many priorities vying for your limited attention. So how can you find the time in your busy life to contribute to the open source projects you care about?


Categories: Open Source

To compete or to collaborate? 4 criteria for making the call

Open Source - Thu, 06/22/2017 - 09:00

In my series on becoming more open, I've written about selecting teammates for an open project, working with people who have different personalities, and encouraging front-line decision-making.


Categories: Open Source

Giving People More Control Over Their Facebook Profile Picture

Facebook - Thu, 06/22/2017 - 04:00

By Aarati Soman, Product Manager

Part of our goal in building global community is understanding the needs of people who use Facebook in specific countries and how we can better serve them. In India, we’ve heard that people want more control over their profile pictures, and we’ve been working over the past year to understand how we can help.

Today, we are piloting new tools that give people in India more control over who can download and share their profile pictures. In addition, we’re exploring ways people can more easily add designs to profile pictures, which our research has shown to be helpful in deterring misuse. Based on what we learn from our experience in India, we hope to expand to other countries soon.

Profile pictures are an important part of building community on Facebook because they help people find friends and create meaningful connections. But not everyone feels safe adding a profile picture. In our research with people and safety organizations in India, we’ve heard that some women choose not to share profile pictures that include their faces anywhere on the internet because they’re concerned about what may happen to their photos.

These tools, developed in partnership with Indian safety organizations like Centre for Social Research, Learning Links Foundation, Breakthrough and Youth Ki Awaaz, are designed to give people more control over their experience and help keep them safe online.

New Controls

People in India will start seeing a step-by-step guide to add an optional profile picture guard. When you add this guard:

  • Other people will no longer be able to download, share or send your profile picture in a message on Facebook
  • People you’re not friends with on Facebook won’t be able to tag anyone, including themselves, in your profile picture
  • Where possible, we’ll prevent others from taking a screenshot of your profile picture on Facebook (this protection is currently available only on Android devices)
  • We’ll display a blue border and shield around your profile picture as a visual cue of protection

Deterring Misuse

Based on preliminary tests, we’ve learned that when someone adds an extra design layer to their profile picture, other people are at least 75% less likely to copy that picture.

We partnered with Jessica Singh, an illustrator who took inspiration from traditional Indian textile designs such as bandhani and kantha, to create designs for people to add to their profile picture.

If someone suspects that a picture marked with one of these designs is being misused, they can report it to Facebook and we will use the design to help determine whether it should be removed from our community.

Categories: Social Networks
