Syndicate content | Develop Site

Bing Tests Lock Icon In New Search Snippet Location

Search Engine Roundtable - Wed, 04/24/2024 - 13:11
Microsoft is testing moving the lock icon in Bing Search result snippets from the left of the URL to the right. Most of the time you don't see a lock at all, but when it does appear, Bing has been experimenting with different placements for it.
Categories: SEO

Get Seven Iconic MS Office Programs For Just $30

TechRepublic - Wed, 04/24/2024 - 12:00
This bundle gives you lifetime access to the 2019 versions of Excel, Word, Outlook, PowerPoint, Access, Publisher and OneNote with no subscription or license fees.
Categories: Technology

Learn Business Best Practices in This Comprehensive MBA Bundle While it’s Just $45

TechRepublic - Wed, 04/24/2024 - 11:10
Now you can get a quicker, more affordable MBA education without going back to school.
Categories: Technology

Daily Search Forum Recap: April 23, 2024

Search Engine Roundtable - Tue, 04/23/2024 - 22:00
Here is a recap of what happened in the search forums today...
Categories: SEO

8 AI Business Trends in 2024, According to Stanford Researchers

TechRepublic - Tue, 04/23/2024 - 21:41
TechRepublic digs into the business implications of artificial intelligence trends highlighted in Stanford’s AI Index Report, with help from co-authors Robi Rahman and Anka Reuel.
Categories: Technology

What Is Mobile CRM?

TechRepublic - Tue, 04/23/2024 - 21:41
Learn the top features offered in mobile CRM applications, plus how to use them, their benefits and which providers offer mobile apps.
Categories: Technology

Artificial Intelligence: Cheat Sheet

TechRepublic - Tue, 04/23/2024 - 19:25
Discover the potential of artificial intelligence with our comprehensive cheat sheet. Learn more about the concepts, platforms and applications of AI.
Categories: Technology

3 Simple Ways to Find Your Windows 10 Product Key

TechRepublic - Tue, 04/23/2024 - 18:30
Have you lost your Windows 10 product key? You can find it listed in the operating system with a little know-how and a few simple commands.
Categories: Technology

7 Best Payroll Outsourcing Companies for 2024

TechRepublic - Tue, 04/23/2024 - 18:18
Need some help with running payroll or other HR functions? Get the lowdown on outsourced payroll and HR options for businesses in 2024.
Categories: Technology

The 7 Best HR Outsourcing Services

TechRepublic - Tue, 04/23/2024 - 18:15
HR outsourcing can be a great way to free up time and resources. Discover the seven best HR outsourcing services and learn how to choose the best provider for your business needs.
Categories: Technology

UKG Pro Review (2024): Pricing, Features, Pros and Cons

TechRepublic - Tue, 04/23/2024 - 18:05
This advanced yet user-friendly platform is beautiful, but beginners and those with inflexible budgets will struggle.
Categories: Technology

New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates

Facebook - Tue, 04/23/2024 - 17:34

Our second-generation smart glasses, in partnership with EssilorLuxottica, have been flying off the shelves — they’re selling out faster than we can make them. And just in time for sunglasses season, we’re expanding the Ray-Ban Meta smart glasses collection with new styles designed to fit more face shapes so you can find the perfect pair. We’re also adding new features, including updates to Meta AI, to make the glasses even more useful.

A Frame for Every Face

Looking for a vintage vibe? Our new Skyler frames feature a cat eye design inspired by an era of iconic jet-set style, designed to suit smaller faces. 

We’re also adding a new low bridge option for our Headliner frames. If your glasses tend to slide down your nose, sit too low on your face or press on your cheeks, this is the fit for you. 


There are hundreds of different custom frame and lens combinations on the Ray-Ban Remix platform, so you can mix and match to make the glasses your own on ray-ban.com. And our new styles are designed to be prescription lens compatible. Skyler and the new Headliner low bridge fit are available for pre-order now on meta.com and ray-ban.com. These new styles are available in 15 countries, including the US, Canada, Australia, and throughout Europe.

We’re also introducing the first limited-edition Ray-Ban Meta smart glasses in an exclusive Scuderia Ferrari colorway for Miami 2024. Ray-Ban Meta for Scuderia Limited Edition brings together the legacy of Ferrari, timeless Ray-Ban design and cutting-edge tech from Meta, available April 24, 2024.

Share Your View on a Video Call

From a breathtaking vista on a hike to experiencing your kid’s first steps, there are some moments in life that are just meant to be shared. That’s why we’re adding the ability to share your view on a video call via WhatsApp and Messenger, completely hands-free. 


At the grocery store and not sure which brand of kombucha to buy? Can’t tell if that pineapple is ripe? Now you can hop on a video call with your mom and get her advice based on what you see. Video calling on Ray-Ban Meta smart glasses is rolling out gradually, so if you don’t see the update right away, sit tight — it’s on the way!

Meta AI Makes Your Smart Glasses Even Smarter

From integrated audio to an ultra-wide 12 MP camera, Ray-Ban Meta smart glasses are jam-packed with tech. And in the US and Canada, you also get Meta AI — an intelligent assistant that helps you get things done, create and connect with the people and things you care about. Just say, “Hey Meta,” and ask away! You can control the glasses using voice commands, and thanks to Meta AI, even get access to real-time information.

We started testing a multimodal AI update in December, so you can ask your glasses about what you’re seeing, and they’ll give you smart, helpful answers or suggestions. That means you can do more with your glasses because now they can see what you see. Starting today, we’re rolling this functionality out to all Ray-Ban Meta smart glasses in the US and Canada in beta. 

Say you’re traveling and trying to read a menu in French. Your smart glasses can use their built-in camera and Meta AI to translate the text for you, giving you the info you need without having to pull out your phone or stare at a screen. 


The post New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates appeared first on Meta.

Categories: Social Media

How to Connect an Apple Wireless Keyboard to Windows 10 and Windows 11

TechRepublic - Tue, 04/23/2024 - 17:25
While it may seem almost obscene to some, you can actually connect an Apple Magic keyboard to a Windows 10 or Windows 11 machine. Here's how.
Categories: Technology

Meta Joins Thorn and Industry Partners in New Generative AI Principles

Facebook - Tue, 04/23/2024 - 15:00

At Meta, we’ve spent over a decade working to keep people safe online. In that time, we’ve developed numerous tools and features to help prevent and combat potential harm – and as predators have adapted to try and evade our protections, we’ve continued to adapt too. 

We’re excited about the opportunities that generative AI technology can bring, but we also want to make sure that innovation and safety go hand in hand. That’s why we take steps to build our generative AI features and models responsibly. For example, we conduct extensive red-teaming exercises in areas like child exploitation with experts and address any vulnerabilities we find.

Now Meta is joining Thorn, All Tech is Human and other leading tech companies in an effort to prevent the misuse of gen AI tools to perpetrate child exploitation. Alongside our industry partners, Meta commits to the below Safety by Design principles from Thorn and All Tech is Human, to be applied as appropriate, and will provide updates on our progress. These principles will inform how we develop gen AI technology at Meta to help ensure we mitigate potential risks from the start. 

DEVELOP: Develop, build and train generative AI models that  proactively address child safety risks.

  • Responsibly source our training datasets, and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM):    This is essential to helping prevent generative models from producing AI-generated (AIG) CSAM and CSEM. The presence of CSAM and CSEM in training datasets for generative models is one avenue in which these models are able to reproduce this type of abusive content. For some models, their compositional generalization capabilities further allow them to combine concepts (e.g. adult sexual content and non-sexual depictions of children) to then produce AIG-CSAM. We are committed to avoiding or mitigating training data with a known risk of containing CSAM and CSEM. We are committed to detecting and removing CSAM and CSEM from our training data, and reporting any confirmed CSAM to the relevant authorities. We are committed to addressing the risk of creating AIG-CSAM that is posed by having depictions of children alongside adult sexual content in our video, image and audio generation training datasets.
  • Incorporate feedback loops and iterative stress-testing strategies in our development process: Continuous learning and testing to understand a model’s capabilities to produce abusive content is key in effectively combating the adversarial misuse of these models downstream. If we don’t stress test our models for these capabilities, bad actors will do so regardless. We are committed to conducting structured, scalable and consistent stress testing of our models throughout the development process for their capability to produce AIG-CSAM and CSEM within the bounds of law, and integrating these findings back into model training and development to improve safety assurance for our generative AI products and systems.
  • Employ content provenance with adversarial misuse in mind: Bad actors use generative AI to create AIG-CSAM. This content is photorealistic, and can be produced at scale. Victim identification is already a needle in the haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm’s way. The expanding prevalence of AIG-CSAM is growing that haystack even further. Content provenance solutions that can be used to reliably discern whether content is AI-generated will be crucial to effectively respond to AIG-CSAM. We are committed to developing state of the art media provenance or detection solutions for our tools that generate images and videos. We are committed to deploying solutions to address adversarial misuse, such as considering incorporating watermarking or other techniques that embed signals imperceptibly in the content as part of the image and video generation process, as technically feasible.
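As a toy illustration of the "embed signals imperceptibly" idea named in the last principle, the sketch below writes a bit pattern into the least significant bits of pixel values. This is only a sketch of the general technique, not Meta's actual provenance system, and the function names are invented for the example.

```python
def embed_watermark(pixels, bits):
    """Write each watermark bit into the least significant bit of a
    pixel value. Flipping the LSB changes brightness by at most
    1/255, which is imperceptible to a viewer."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, n):
    """Recover the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

marked = embed_watermark([200, 201, 202, 203], [1, 0, 1, 1])
print(extract_watermark(marked, 4))  # [1, 0, 1, 1]
```

A plain LSB mark like this is destroyed by re-compression or resizing; production provenance systems pair robust watermarks with signed metadata, which is why the principle hedges with "as technically feasible."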

DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.

  • Safeguard our generative AI products and services from abusive content and conduct:  Our generative AI products and services empower our users to create and explore new horizons. These same users deserve to have that space of creation be free from fraud and abuse. We are committed to combating and responding to abusive content (CSAM, AIG-CSAM and CSEM) throughout our generative AI systems, and incorporating prevention efforts. Our users’ voices are key, and we are committed to incorporating user reporting or feedback options to empower these users to build freely on our platforms.
  • Responsibly host models: As our models continue to achieve new capabilities and creative heights, a wide variety of deployment mechanisms manifests both opportunity and risk. Safety by design must encompass not just how our model is trained, but how our model is hosted. We are committed to responsible hosting of our first-party generative models, assessing them e.g. via red teaming or phased deployment for their potential to generate AIG-CSAM and CSEM, and implementing mitigations before hosting. We are also committed to responsibly hosting third party models in a way that minimizes the hosting of models that generate AIG-CSAM. We will ensure we have clear rules and policies around the prohibition of models that generate child safety violative content.
  • Encourage developer ownership in safety by design: Developer creativity is the lifeblood of progress. This progress must come paired with a culture of ownership and responsibility. We encourage developer ownership in safety by design. We will endeavor to provide information about our models, including a child safety section detailing steps taken to avoid the downstream misuse of the model to further sexual harms against children. We are committed to supporting the developer ecosystem in their efforts to address child safety risks.

MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

  • Prevent our services from scaling access to harmful tools: Bad actors have built models specifically to produce AIG-CSAM, in some cases targeting specific children to produce AIG-CSAM depicting their likeness. They also have built services that are used to “nudify” content of children, creating new AIG-CSAM. This is a severe violation of children’s rights. We are committed to removing from our platforms and search results these models and services. [This principle only applies to search engines and public-facing third party model providers.]
  • Invest in research and future technology solutions: Combating child sexual abuse online is an ever-evolving threat, as bad actors adopt new technologies in their efforts. Effectively combating the misuse of generative AI to further child sexual abuse will require continued research to stay up to date with new harm vectors and threats. For example, new technology to protect user content from AI manipulation will be important to protecting children from online sexual abuse and exploitation. We are committed to investing in relevant research and technology development to address the use of generative AI for online child sexual abuse and exploitation. We will continuously seek to understand how our platforms, products and models are potentially being abused by bad actors. We are committed to maintaining the quality of our mitigations to meet and overcome the new avenues of misuse that may materialize. 
  • Fight CSAM, AIG-CSAM and CSEM on our platforms: We are committed to fighting CSAM online and preventing our platforms from being used to create, store, solicit or distribute this material. As new threat vectors emerge, we are committed to meeting this moment. We are committed to detecting and removing child safety violative content on our platforms. We are committed to disallowing and combating CSAM, AIG-CSAM and CSEM on our platforms, and combating fraudulent uses of generative AI to sexually harm children.

The post Meta Joins Thorn and Industry Partners in New Generative AI Principles appeared first on Meta.

Categories: Social Media

monday.com vs. Jira: Which Is Best for Your Team?

TechRepublic - Tue, 04/23/2024 - 15:00
monday or Jira? Find out which project management software solution is better and see how they stack up against each other.
Categories: Technology

Django security releases issued: 5.0.2, 4.2.10, and 3.2.24

Django - Tue, 02/06/2024 - 15:55

In accordance with our security release policy, the Django team is issuing Django 5.0.2, Django 4.2.10, and Django 3.2.24. These releases address the security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2024-24680: Potential denial-of-service in intcomma template filter

The intcomma template filter was subject to a potential denial-of-service attack when used with very long strings.
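The quadratic cost behind such a filter can be sketched in pure Python. The function below is an illustrative reimplementation of the comma-grouping idea, not Django's exact code:

```python
import re

def intcomma_sketch(value):
    """Group digits with commas, e.g. 1234567 -> '1,234,567'.

    Each pass inserts one comma and then re-scans the whole string,
    so an n-digit input needs roughly n/3 passes of O(n) work:
    quadratic time overall, which is the denial-of-service vector
    when very long strings reach a filter built this way.
    """
    text = str(value)
    while True:
        new = re.sub(r"^(-?\d+)(\d{3})", r"\g<1>,\g<2>", text)
        if new == text:
            return text
        text = new

print(intcomma_sketch(1234567))  # 1,234,567
```

Per the advisory, upgrading to 5.0.2, 4.2.10, or 3.2.24 is the remedy.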

Thanks to Seokchan Yoon for the report.

This issue has severity "moderate" according to the Django security policy.

Affected supported versions
  • Django main branch
  • Django 5.0
  • Django 4.2
  • Django 3.2
Resolution

Patches to resolve the issue have been applied to Django's main branch and the 5.0, 4.2, and 3.2 stable branches.

The following releases have been issued:

  • Django 5.0.2
  • Django 4.2.10
  • Django 3.2.24

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum, nor via the django-developers list. Please see our security policies for further information.

Categories: Programming

DSF calls for applicants for a Django Fellow

Django - Fri, 01/19/2024 - 20:18

After five years as part of the Django Fellowship program, Mariusz Felisiak has let us know that he will be stepping down as a Django Fellow in March 2024 to explore other things. Mariusz has made an extraordinary impact as a Django Fellow and has been a critical part of the Django community.

The Django Software Foundation and the wider Django community are grateful for his service and assistance.

The Fellowship program was started in 2014 as a way to dedicate high-quality and consistent resources to the maintenance of Django. As Django has matured, the DSF has been able to fundraise and earmark funds for this vital role. As a result, the DSF currently supports two Fellows - Mariusz Felisiak and Natalia Bidart. With the departure of Mariusz, the Django Software Foundation is announcing a call for Django Fellow applications. The new Fellow will work alongside Natalia.

The position of Fellow is focused on maintenance and community support - the work that benefits most from constant, guaranteed attention rather than volunteer-only efforts. In particular, the duties include:

  • Answering contributor questions on the Django Forum and the django-developers mailing list
  • Helping new Django contributors land patches and learn our philosophy
  • Monitoring the security@djangoproject.com email alias and ensuring security issues are acknowledged and responded to promptly
  • Fixing release blockers and helping to ensure timely releases
  • Fixing severe bugs and helping to backport fixes to these and security issues
  • Reviewing and merging pull requests
  • Triaging tickets on Trac

Being a Django contributor isn't a prerequisite for this position — we can help get you up to speed. We'll consider applications from anyone with a proven history of working with either the Django community or another similar open-source community. Geographical location isn't important either - we have several methods of remote communication and coordination that we can use depending on the timezone difference to the supervising members of Django.

If you're interested in applying for the position, please email us at fellowship-committee@djangoproject.com describing why you would be a good fit along with details of your relevant experience and community involvement. Also, please include your preferred hourly rate and when you'd like to start working. Lastly, please include at least one recommendation.

Applicants will be evaluated based on the following criteria:

  • Details of Django and/or other open-source contributions
  • Details of community support in general
  • Understanding of the position
  • Clarity, formality, and precision of communications
  • Strength of recommendation(s)

Applications will be open until 1200 AoE, February 16, 2024, with the expectation that the successful candidate will be notified no later than March 1, 2024.

Categories: Programming

DjangoCon Europe 2025 Call for Proposals

Django - Mon, 01/15/2024 - 17:14

DjangoCon Europe 2024 will be held June 5th-9th in Vigo, Spain, but we're already looking ahead to the 2025 conference. Could your town - or your football stadium, circus tent, private island or city hall - host this wonderful community event?

Hosting a DjangoCon is an ambitious undertaking. It's hard work, but each year it has been successfully run by a team of community volunteers, not all of whom have had previous experience - more important is enthusiasm, organizational skills, the ability to plan and manage budgets, time and people - and plenty of time to invest in the project.

How to apply

We've set up a working group of previous DjangoCon Europe organizers that you can reach out to with questions about organizing and running a DjangoCon Europe: european-organizers-support@djangoproject.com. There will also be an informational session towards the end of January or early February for interested organizers. Please email the working group to express interest in participating.

In order to give people the chance to go to many different conferences, DjangoCon Europe should be held between January 5 and April 15, 2025. Please read the licensing agreement the selected organizers will need to sign for the specific requirements around hosting a DjangoCon Europe.

If you're interested, we'd love to hear from you. This year we are going to do rolling reviews of applications, in order to give the selected proposal more time and certainty to start planning. The board will begin evaluating proposals on February 20th. The selection will be made at any time between February 20th and May 31st. The DSF Board will communicate when a selection has been made and the application process is complete. If you are interested in organizing, it is in your best interest to get a good proposal in early.

Following the established tradition, the selected hosts will be publicly announced at this year's DjangoCon Europe by the current organizers.

The more detailed and complete your proposal, the better. Things you should consider, and that we'd like to know about, are:

  • dates (ideally between early January and mid-April 2025)
  • numbers of attendees
  • venue(s)
  • accommodation
  • transport links
  • budgets and ticket prices
  • committee members

We'd like to see:

  • timelines
  • pictures
  • prices
  • draft agreements with providers
  • alternatives you have considered

Email your proposals to djangocon-europe-2025-proposals@djangoproject.com. We look forward to reviewing great proposals that continue the excellence the whole community associates with DjangoCon Europe.

Categories: Programming

Pages