10Pearls

Staff/Senior Data Engineer - ETL/AWS/Python/Apache

Islamabad, Islamabad, Pakistan - Full Time

Company Overview:  
10Pearls is an end-to-end digital technology services partner helping businesses use technology as a competitive advantage. We help our customers digitalize their existing business, build innovative new products, and augment their existing teams with high-performing team members. Our broad expertise in product management, user experience/design, cloud architecture, software development, data insights and intelligence, cyber security, emerging tech, and quality assurance ensures that we deliver solutions that address business needs. 10Pearls is proud to have a diverse clientele including large enterprises, SMBs, and high-growth startups. We work with clients across industries, including healthcare/life sciences, education, energy, communications/media, financial services, and hi-tech. Our many long-term, successful partnerships are built on trust, integrity, and successful delivery and execution.
Role 
We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have 5+ years of experience and a strong background in Python, SQL, data pipelines, data modeling, Apache Spark, and Snowflake. The role involves designing, building, and maintaining scalable data solutions that support analytics and business decision-making.
  
Responsibilities 
• Develop, construct, test, and maintain production-grade, scalable data pipelines 
• Design and implement robust data models for analytics and reporting 
• Assemble large, complex data sets that meet functional and non-functional business requirements 
• Improve data reliability, quality, and performance across pipelines 
• Prepare curated datasets for analytics and advanced modeling use cases 
• Identify opportunities to automate data workflows and processes 
• Build and manage data workflows using Apache Airflow 
• Optimize data processing using Apache Spark (batch and/or streaming workloads) 
• Collaborate with Product, Analytics, and Engineering teams to understand evolving business requirements and deliver scalable data solutions
• Monitor pipeline health and implement logging, alerting, data quality checks, and performance tuning 
• Apply best practices for version control, CI/CD, and deployment using Git and Docker 
• Design and implement cloud-native data solutions on AWS or GCP, following cloud-platform best practices
• Ensure data security, governance, access control, and schema evolution best practices are followed
  
Requirements 
• Bachelor’s degree in Computer Science, Engineering, or a related field 
• Minimum of 5 years of hands-on data engineering experience building production data pipelines
• Strong hands-on experience with Python and SQL 
• Proven experience building ELT/ETL pipelines at scale 
• Solid understanding of data modeling concepts, including dimensional modeling, star schemas, and analytical schema design
• Hands-on experience with Apache Spark / PySpark for large-scale data processing 
• Experience with workflow orchestration tools such as Apache Airflow 
• Experience with cloud data warehouses such as Snowflake or BigQuery 
• Hands-on experience building data engineering solutions on cloud platforms (AWS or GCP)
• Experience using Docker for containerized applications
• Familiarity with CI/CD pipelines and modern DevOps practices for data platforms
• Strong problem-solving skills and attention to detail 
• Strong communication skills 
