A SWEAP task is a real-world GitHub problem packaged with a testing environment and a solution (the "golden patch"). The environment and solution are used to train an agent to solve the problem and to verify candidate solutions in the testing environment. As a contributor, you verify the testing environment from its testing logs, categorize the problem by specificity and knowledge areas, and write additional notes that help an agent understand what a good solution looks like, based on the code edited in the golden patch and the notes on GitHub.
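To make the pieces concrete, here is a minimal, purely illustrative sketch of what a task bundle and an environment check might look like. Every name here (SweapTask, the fail_to_pass/pass_to_pass fields, environment_verified) is a hypothetical assumption for illustration, not the actual SWEAP data format:

```python
from dataclasses import dataclass, field

@dataclass
class SweapTask:
    """Hypothetical sketch of the pieces a SWEAP task bundles together."""
    repo: str                 # GitHub repository the problem comes from
    problem_statement: str    # real-world issue description
    golden_patch: str         # reference solution, e.g. as a unified diff
    fail_to_pass: list = field(default_factory=list)  # tests the patch should fix
    pass_to_pass: list = field(default_factory=list)  # tests that must keep passing

def environment_verified(test_log: dict, task: SweapTask) -> bool:
    """The environment checks out when, per the testing logs, the golden
    patch makes the failing tests pass without breaking passing ones."""
    return (all(test_log.get(t) == "PASSED" for t in task.fail_to_pass)
            and all(test_log.get(t) == "PASSED" for t in task.pass_to_pass))

task = SweapTask(
    repo="example/project",
    problem_statement="Fix off-by-one error in pagination",
    golden_patch="--- a/pager.py\n+++ b/pager.py\n...",
    fail_to_pass=["test_last_page"],
    pass_to_pass=["test_first_page"],
)
log = {"test_last_page": "PASSED", "test_first_page": "PASSED"}
print(environment_verified(log, task))  # True
```

The contributor's verification step amounts to reading the real testing logs and confirming the same property this toy check encodes.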
Accepted Locations
We accept applicants from the US, Canada, and most countries in LATAM and Europe. We are also accepting candidates from some countries in Africa and Asia. For the complete list of accepted locations, click here. This work is 100% remote.
Loom Video
Our Founder/CEO, Gabe Greenberg, created an in-depth Loom video that we highly recommend you watch! Check it out here: Loom Video
Overview
Join our expert annotation team to create training data for the world's most advanced AI models. No previous AI experience is necessary, and you'll get your foot in the door with one of the most prominent players in the AI/LLM space today. We're seeking contributors with professional software engineering experience building and maintaining large-scale production repositories. Projects typically involve discrete, highly variable problems that require engaging with these models as they learn to code. We currently have 200+ roles open!
What Will I Be Doing?
Verifying the testing environment based on testing logs.
Categorizing problems by specificity and knowledge areas.
Writing additional notes to help an agent understand what a good solution looks like based on the code edited in “the golden patch” and notes on GitHub.
Evaluating the quality of AI-generated code, including human-readable summaries of your rationale.
Solving coding problems and writing functional and efficient code in various programming languages.
Writing robust test cases to confirm code works efficiently and effectively.
Creating instructions to help others and reviewing code before it goes into the model.
Engaging in a variety of projects, from evaluating code snippets to developing full applications using chatbots.
Pay Rates
Compensation averages $30/hr and can go up to $50/hr. The expectation is 15+ hours per week, with no upper limit; you can work as much as you want and will be paid weekly per hour of work done on the platform.
Contract Length
This is a long-term contract with no end date. We expect to have work for the next 2 years. You can end the contract at any time, but we hope you will continue working with us long term.