
Data Engineer/Python Developer

Job #: 25-05180
Pay Rate: Not Specified
Job Type: Contract
Location: Minnesota, RI
Responsibilities:
• Develop, optimize, and maintain ETL/ELT pipelines using PySpark and SQL.
• Work with structured and unstructured data to build scalable data solutions.
• Write efficient and scalable PySpark scripts for data transformation and processing.
• Optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
• Design and implement data models, schemas, and partitioning strategies for large-scale datasets.
• Collaborate with Data Scientists, Analysts, and other Engineers to integrate data workflows.
• Ensure data quality, validation, and consistency in data pipelines.
• Implement error handling, logging, and monitoring for data pipelines.
• Work with cloud platforms (AWS, Azure, or GCP) for data processing and storage.
• Optimize data pipelines for cost efficiency and performance.
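To give candidates a concrete feel for the validation, error-handling, and logging duties listed above, here is a minimal illustrative sketch in plain Python (standard library only; the `run_pipeline` function, column names, and sample data are hypothetical examples, not part of this role's actual codebase):

```python
import csv
import io
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def run_pipeline(raw_csv: str) -> list[dict]:
    """Toy extract-transform-load step: parse CSV, validate each row,
    log and skip malformed records, and return the clean records."""
    clean = []
    for i, row in enumerate(csv.DictReader(io.StringIO(raw_csv)), start=1):
        try:
            # Validation: every record must carry a numeric "amount".
            amount = float(row["amount"])
        except (KeyError, ValueError):
            # Error handling + logging: skip bad rows instead of failing the job.
            log.warning("skipping malformed row %d: %r", i, row)
            continue
        clean.append({"id": row["id"], "amount": amount})
    return clean

records = run_pipeline("id,amount\na,10\nb,oops\nc,2.5\n")
print(records)  # [{'id': 'a', 'amount': 10.0}, {'id': 'c', 'amount': 2.5}]
```

In a production pipeline the same pattern would typically run inside PySpark with quarantine tables and monitoring hooks rather than in-memory lists.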
Technical Skills Required:
✅ Strong experience in Python for data engineering tasks.
✅ Proficiency in PySpark for large-scale data processing.
✅ Deep understanding of SQL (Joins, Window Functions, CTEs, Query Optimization).
✅ Experience in ETL/ELT development using Spark and SQL.
✅ Experience with cloud data services (AWS Glue, Databricks, Azure Synapse, GCP BigQuery).
✅ Familiarity with orchestration tools (Airflow, Apache Oozie).
✅ Experience with data warehousing (Snowflake, Redshift, BigQuery).
✅ Understanding of performance tuning in PySpark and SQL.
✅ Familiarity with version control (Git) and CI/CD pipelines.
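As an informal illustration of the SQL depth expected (CTEs plus window functions), the sketch below uses Python's built-in `sqlite3` module; the `sales` table and its data are invented purely for this example:

```python
import sqlite3

# Hypothetical table used only to demonstrate the query pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

# CTE + window function: find the top sale per region.
query = """
WITH ranked AS (
  SELECT region, amount,
         ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
  FROM sales
)
SELECT region, amount FROM ranked WHERE rn = 1 ORDER BY region;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('east', 300), ('west', 200)]
```

The same CTE/window-function pattern carries over directly to Spark SQL, Snowflake, Redshift, and BigQuery.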
Copyright © 2026 TechDigital Group
Monster Strategic Talent Solutions