Python - Zazmic

Dear Candidate,

Before submitting your resume, please note the location requirement: we will not be able to review your resume or provide feedback if you are not actually located in the vacancy's location.

Thank you for your understanding.

Job Title: Senior Data Engineer (Apache Spark, GCP)

Join our dynamic data team, where you’ll have the opportunity to shape the future of our data infrastructure and drive innovation across the organization. We’re looking for a highly skilled Data Engineer who thrives in a fast-paced environment and is passionate about leveraging cutting-edge technologies to deliver impactful solutions.

Role Overview:

As a Senior Data Engineer, you will play a critical role in designing, developing, and optimizing scalable data pipelines using Apache Spark and Google Cloud Platform (GCP). Your expertise will be pivotal in ensuring the quality, reliability, and accessibility of our data, empowering our data scientists and analysts to make data-driven decisions.

Key Responsibilities:

  • Architect, build, and maintain robust, scalable data pipelines utilizing Apache Spark and GCP’s suite of data tools.
  • Lead the implementation and optimization of ETL processes on GCP, with a focus on BigQuery, Dataflow, and Cloud Storage.
  • Collaborate closely with data scientists and analysts to productionize machine learning models and enhance data-driven decision-making.
  • Uphold and improve data quality, reliability, and accessibility across our data ecosystem.
  • Drive data architecture decisions and establish best practices that align with business objectives.
  • Mentor and guide junior engineers, fostering a culture of knowledge sharing and continuous learning within the team.
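
The pipeline work described above can be sketched as a toy extract-transform-load step in plain Python (Spark and GCP specifics omitted; all function names and record shapes here are invented for illustration, not taken from the posting):

```python
# Toy ETL sketch: extract raw records, clean/normalize them, and
# "load" the surviving rows. Field names are hypothetical.

def clean_event(raw: dict):
    """Drop malformed events and normalize field names."""
    if "user_id" not in raw or raw.get("ts") is None:
        return None  # reject records missing required fields
    return {
        "user_id": str(raw["user_id"]),
        "timestamp": int(raw["ts"]),
        "country": (raw.get("country") or "unknown").lower(),
    }

def run_pipeline(source):
    """Extract -> transform -> load, returning the loaded rows."""
    loaded = []
    for raw in source:
        row = clean_event(raw)
        if row is not None:
            loaded.append(row)
    return loaded

events = [
    {"user_id": 1, "ts": 1700000000, "country": "US"},
    {"ts": 1700000050},                # malformed: no user_id
    {"user_id": 2, "ts": 1700000100},  # missing country -> "unknown"
]
loaded = run_pipeline(events)
print(len(loaded))  # 2
```

In a real Spark job the `clean_event` logic would typically live in a DataFrame transformation rather than a Python loop, but the validate-normalize-load shape is the same.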

What We're Looking For:

  • Educational Background: Bachelor’s degree in Computer Science, Engineering, or a related field. A Master’s degree is a plus.
  • Experience: A minimum of 5 years in a Data Engineering role or a similar position, with a proven track record of delivering scalable data solutions.
  • Extensive experience with Apache Spark, including Spark SQL and Spark Streaming.
  • In-depth knowledge of Google Cloud Platform, particularly BigQuery, Dataflow, and Cloud Storage.
  • Proficiency in programming languages such as Python and Java/Scala.
  • Strong understanding of data modeling concepts and techniques.
  • Tools & Processes: Experience with version control systems like Git, and CI/CD pipelines for seamless code deployment.
  • Soft Skills: Strong problem-solving abilities, meticulous attention to detail, and excellent communication skills.

Preferred Qualifications:

  • Familiarity with data governance and security best practices.
  • Experience in real-time data processing and streaming architectures.

Why Join Us:

  • Impact: Work on projects that reach millions of users and contribute to meaningful business outcomes
  • Growth: We offer competitive compensation, comprehensive benefits, and opportunities for professional growth
  • Innovation: Be part of a team that values creativity, embraces new ideas, and works with the latest technologies

Dear Candidate,

In an era of rapid technological advancement and the constant evolution of artificial intelligence, at Zazmic, we believe in the importance of analyzing resumes not only through automated tools but also through interaction with a live recruiter. We value an individualized approach to each candidate and strive to make the hiring process more friendly and efficient.

Understanding the significance of your time and that of our colleagues, we offer you the opportunity to provide additional information that will help us better understand your profile and its alignment with the job description. Your initiative will assist us in making a more informed decision when considering your candidacy.

Please note that Zazmic reserves the right not to respond to a candidate’s application if we conclude that the candidate does not meet our requirements for any reason. Please understand this as part of our commitment to an efficient and fair hiring process.
Thank you for your understanding and participation in our recruitment process.

Best regards,
The Zazmic Team



    This position requires work-hours alignment with the UK/Lisbon time zone.

    About the project – №1 Travel platform in the world!

    We believe that we are better together, and we welcome you for who you are. Our workplace is for everyone, as is our people powered platform. At company, we want you to bring your unique perspective and experiences, so we can collectively revolutionize travel and together find the good out there.

    Product is the world’s largest travel site, operating at scale with over 500 million reviews, opinions, photos, and videos and reaching over 390 million unique visitors each month. We are a data-driven company that leverages data to empower our decisions. Product is extremely excited to play a pivotal role in supporting our travelers.

    Our data engineering team is focused on delivering Product’s first-in-class data products that serve all data users across the organization. As a member of the Data Platform Enterprise Services Team, you will collaborate with engineering and business stakeholders to build, optimize, maintain, and secure the full data vertical, including tracking instrumentation, information architecture, ETL pipelines, and tooling that provide key analytics insights for business-critical decisions at the highest levels of Product, Finance, Sales, CRM, Marketing, Data Science, and more. All in a dynamic environment with a continuously modernizing tech stack, including highly scalable architecture, cloud-based infrastructure, and real-time responsiveness.

    Product provides a unique, global work environment that captures the speed, innovation and excitement of a startup, at a thriving, growing and well-established industry brand.

    We take pride in our data engineering and are looking for a talented and highly motivated engineer with a passion for solving interesting problems to add to our high-performing team.

    What you will do:

    • Provide the organization’s data consumers with high-quality data sets through curation, consolidation, and manipulation of a wide variety of large-scale (terabyte and growing) sources
    • Build first-class data products and ETL processes that interact with terabytes of data on leading platforms such as Snowflake and BigQuery
    • Partner with our Analytics, Product, CRM, and Marketing teams
    • Be responsible for the data pipelines’ SLAs and dependency management
    • Write technical documentation for data solutions and present at design reviews
    • Resolve data pipeline failures and implement anomaly detection
    • Work with teams across Data Science, Product, Marketing, and software engineering on data solutions and technical challenges
    • Mentor junior members of the team

    Who You Are:

    • Bachelor’s degree in Computer Science or related field
    • 6+ years of experience in commercial data engineering or software development
    • Experience with Big Data technologies such as Snowflake, Databricks, PySpark
    • Expert level skills in writing and optimizing complex SQL; advanced data exploration skills with proven record of querying and analyzing large datasets
    • Solid experience developing complex ETL processes from concept to implementation to deployment and operations, including SLA definition, performance measurements and monitoring
    • Hands-on knowledge of the modern AWS Data Ecosystem, including AWS S3
    • Experience with relational databases such as Postgres, and with programming languages such as Python and/or Java
    • Knowledge of cloud data warehouse concepts
    • Experience in building and operating data pipelines and products in compliance with the data mesh philosophy is beneficial. Demonstrated proficiency in data management, including data lineage, data quality, data observability, and data discoverability
    • Excellent verbal and written communication skills. Ability to convey key insights from complex analyses in summarized business terms to non-technical stakeholders, and to communicate effectively with other technical teams
    • Strong interpersonal skills and the ability to work in a fast-paced and dynamic environment
    • Ability to make progress on projects independently and enthusiasm for solving difficult problems
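
The SQL expertise listed above is easy to illustrate with a small self-contained query. Here Python's built-in sqlite3 stands in for a warehouse such as Snowflake or BigQuery, and the table and data are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a cloud data warehouse.
# Schema and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE reviews (user_id INT, rating INT, city TEXT);
    INSERT INTO reviews VALUES
        (1, 5, 'Lisbon'), (2, 3, 'Lisbon'),
        (3, 4, 'London'), (4, 2, 'London'), (5, 5, 'London');
""")

# Aggregate average rating per city, highest first -- the kind of
# exploratory query used to sanity-check a curated data set.
rows = conn.execute("""
    SELECT city, COUNT(*) AS n, ROUND(AVG(rating), 2) AS avg_rating
    FROM reviews
    GROUP BY city
    ORDER BY avg_rating DESC
""").fetchall()
print(rows)  # [('Lisbon', 2, 4.0), ('London', 3, 3.67)]
```

On a real warehouse the same GROUP BY/aggregate shape applies, with optimization concerns (partitioning, clustering, scan cost) layered on top.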

    Why join us:

    • Ability to work remotely from anywhere in the world
    • Close cooperation with the development team and client
    • Opportunity to influence product development
    • Professional growth: the certification preparation course is free for our specialists. The company pays for two attempts to pass the exam, regardless of the exam result
    • We cover English classes (with a native speaker)
    • Boost your professional brand: participate in local conferences as a listener or a speaker
    • Regular team-building events: have fun with teammates
    • Gifts for significant life events (marriage, childbirth)
    • Tech and non-tech Zazmic Communities: support and share experience with each other

