
Generative AI Data Analyst - California, USA (Remote)

Welocalize
California (Remote) · $36/hour · Mar 25, 2026 · Posted 17 days ago


Must-Have Requirements

  • Hands-on experience performing data annotation or evaluation tasks
  • Native or near-native English with excellent writing skills
  • Strong attention to detail and ability to follow guidelines consistently
  • Self-driven and motivated to work on state-of-the-art machine learning tools
  • 4-year accredited college degree or equivalent experience
  • Valid work authorization in the US

Nice to Have

  • College degree or experience in Linguistics, English Literature, Creative Writing, Journalism
  • Domain knowledge in Law/Medical/Math/Coding
  • Experience working in annotation platforms or structured labeling environments
  • Deep understanding of Large Language Models/RLHF
  • Experience in labeling/tagging of frames/tasks/prompts for DNN
  • QA/testing experience

Description

OVERVIEW

We are seeking a Generative AI Analyst to support a high-impact machine learning project. This role focuses on creating high-quality prompts and responses across diverse topics, leading labeling initiatives with internal and external partners, and developing clear guidelines to ensure consistency and accuracy in large language model datasets. The ideal candidate is a strong communicator with native-level U.S. English, experienced in working with data and comfortable training teams on best practices for LLM development. This position is fully remote and suited for someone motivated to work with cutting-edge AI technologies.

Project Details

  • Job Title: Generative AI Analyst
  • Location: Remote
  • Hours: 40 hours weekly
  • Language: English (US)
  • Start date: April 2026
  • Employment Type: Full-time W-2 employee with benefits – 5 days a week
  • Pay rate: $36/hour

Must have valid work authorization in the US (Welo Data does not sponsor VISAs at this time).

Key Responsibilities

  • Creatively write prompts and responses on a variety of diverse topics
  • Perform LLM annotation and evaluation tasks (ranking, scoring, labeling, tagging)
  • Evaluate model outputs for accuracy, relevance, and instruction-following
  • Identify and document issues such as hallucinations and inconsistencies
  • Participate in and/or support labeling workflows, including hands-on annotation and collaboration with internal or external teams
  • Train teams on best practices for creating Large Language Model datasets

Requirements

  • Hands-on experience performing data annotation or evaluation tasks (e.g., labeling, ranking, scoring, or tagging LLM outputs)
  • Native or near-native English with excellent writing skills
  • Strong attention to detail and ability to follow guidelines consistently
  • Self-driven, motivated, and enthusiastic to work on state-of-the-art machine learning tools
  • 4-year accredited college degree or equivalent experience

Ways to stand out from the crowd

  • College degree or experience in Linguistics, English Literature, Creative Writing, or Journalism, plus domain knowledge (Law/Medical/Math/Coding/etc.)
  • Experience working in annotation platforms or structured labeling environments
  • Deep understanding of Large Language Models/RLHF
  • Experience labeling/tagging frames/tasks/prompts to prepare data for DNNs
  • QA/testing experience