Senior Data Engineer
Description
Are you obsessed with data, partner success, taking action, and changing the game? If you have a whole lot of hustle and a touch of nerd, come work with Pattern! We want you to use your skills to push one of the fastest-growing companies headquartered in the US to the top of the list.
Pattern accelerates brands on global ecommerce marketplaces, leveraging proprietary technology and AI. Utilizing more than 46 trillion data points and sophisticated machine learning and AI models, Pattern optimizes and automates all levers of ecommerce growth for global brands, including advertising, content management, logistics and fulfillment, pricing, forecasting, and customer service. Hundreds of global brands depend on Pattern’s ecommerce acceleration platform every day to drive profitable revenue growth across 60+ global marketplaces—including Amazon, Walmart.com, Target.com, eBay, Tmall, TikTok Shop, JD, and Mercado Libre. To learn more, visit pattern.com or email press@pattern.com.
Pattern has been named one of the fastest-growing tech companies headquartered in North America by Deloitte and one of the best-led companies by Inc. We place employee experience at the center of our business model and have been recognized as one of Newsweek’s Global Most Loved Workplaces®.
Pattern values data and the engineering required to take full advantage of it. As a Data Engineer at Pattern, you will be working on business problems that have a huge impact on how the company maintains our competitive edge.
Roles and Responsibilities
- Develop, deploy, and support automated, scalable real-time and batch data streams from a variety of sources into the lakehouse
- Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures
- Collaborate with technology teams and partners to specify data requirements and provide access to data
- Tune application and query performance using profiling tools and SQL or other relevant query languages
- Translate business and analytics data requirements into comprehensive data models and pipelines
- Foster data expertise and own data quality for assigned areas of ownership
- Work with data infrastructure to triage issues and drive them to resolution
Required Qualifications
- Bachelor’s degree in Data Science, Data Analytics, Information Management, Computer Science, Information Technology, or a related field, or equivalent professional experience
- 7+ years of overall experience
- 3+ years of experience working with SQL and Python
- 3+ years of experience implementing data pipelines using modern data architectures
- 2+ years of experience working with data warehouses such as Redshift, BigQuery, Snowflake, or similar
- Experience with open-source data architectures: Spark, Hive, Trino/Presto, or similar
- Excellent software engineering and scripting knowledge
- Strong communication skills (both presentation and comprehension) and an aptitude for cross-collaboration across data management and analytics domains
- Expertise with data systems handling massive data volumes from various data sources
- Ability to lead a team of Data Engineers
Preferred Qualifications
- Experience working with time series databases
- Advanced knowledge of SQL, including the ability to write stored procedures, triggers, and analytic/windowing functions, and to tune queries
- Advanced knowledge of Snowflake, including the ability to write and orchestrate streams and tasks
- Background in Big Data, non-relational databases, Machine Learning, and Data Mining
- Experience with cloud-based technologies including SNS, SQS, SES, S3, Lambda, and Glue
- Experience with modern data platforms such as Redshift, Cassandra, DynamoDB, Apache Airflow, Spark, or Elasticsearch
- Expertise in Data Quality and Data Governance