
Software Engineer

Posted by Shivika Upadhyay
UK (Remote)


Placement Type:

Contract – 12 months

Rate:

£376 - £419 per day (PAYE Inside IR35)

The Messenger Well-Being Enforcement & Fairness team’s mission is to manage risky & violating users’ experience on our platform in a proportional, fair & compliant way.
Our vision for Messenger Well-being is to make Messenger & Instagram Direct the safest & most supportive messaging & calling platforms in the world. Part of this mission requires limiting risky actors & enforcing against violations in a way that balances the safety of the community with fairness & remediation of mistakes.

This is a fast-growing space, and the team will need to address three key topics in 2024:

  • Unlock a proportional but effective enforcement system for all company-priority messaging experiences (community messaging, genAI, Threads, Monetisation).
      ◦ Messaging has recently grown from just private messaging to a portfolio of experiences (community messaging, genAI characters & creations, soon Threads Inbox, messaging monetisation…).
      ◦ This creates new opportunities to extend our current penalty system, and to build new ones, so that violations made in one context don’t have disproportionate consequences in other messaging environments.
      ◦ It also raises new questions: how to stop chats where multiple users are violating, how multiple admins can request an appeal for another admin’s violation, or whether violating genAI content can be attributed back to the user’s prompt. Getting these nuances & new concepts right, in a transparent, defensible & efficient way, will be critical to Messenger’s mission & success in the years to come.
  • Build a 0-to-1 new space around detection, gating & enforcement of risky actors & groups in messaging.
      ◦ Whilst we’ve made progress on gating & soft actions against risky users, many of our definitions rely on a user’s behaviour on feed to form an assessment.
      ◦ From spam to scams to more severe harms like Child Safety or Dangerous Orgs, many of our most serious problems come down to a motivated actor.
      ◦ We want to progress our response in this space: gating risky users from key products, feeding back intelligence on possible repeat actors to XI, building new account-level risk signals dedicated to messaging & likely violating communities, and triggering profile reviews & enforcement where appropriate.
      ◦ This is net-new work that requires strategy & first ML explorations, as well as a clear ownership model across Messenger & XI.
  • Build a 0-to-1 automation of appeals, to improve the fairness of our decisions & the legitimacy of our approach.
      ◦ Transparency of decisions & ease of requesting an appeal grew quickly in 2023.
      ◦ This will improve the fairness of our enforcement systems and limit the impact of our mistakes. But appeals are expected to grow 3x in volume over the next year, surpassing our limited human-review capacity.
      ◦ This creates an interesting 0-to-1 opportunity for ranking & automating the review of appeals, working with ML engineers in the MWB Review platform & XI.

Minimum qualifications:

  • Proficiency in full-stack coding languages (preferred) or backend coding.
  • Experience working on large-scale systems.
  • Experience independently developing and releasing changes to production.
  • Strong analytical skills.

Responsibilities:

  • Full-stack development or backend coding on some of our key enforcement systems.
  • Create products and features using the internal programming language Hack.
  • Develop services in a scalable and reliable manner.
  • Develop an end-to-end understanding of the system.
  • Successfully complete projects of large scope while maintaining a consistently high level of productivity.

Preferred qualifications:

  • Bachelor’s degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience.

Client Description

Our Client is the largest social media company in the world. They have substantial B2B and B2C advertising and media platforms, as well as a nonprofit initiative. With the mission of bringing people together, they now boast over 2 billion users, and are rapidly developing as they influence the world around us.


Aquent is dedicated to improving inclusivity & is proudly an equal opportunities employer. We encourage applications from under-represented groups & are committed to providing support to applicants with disabilities. We aim to provide reasonable accommodation for any part of the employment process to those with a medical condition, disability or neurodivergence.