About The Position
We are looking for a Data Engineer to join our data science team! Get ready to work with everything connected to data: expanding and optimizing our data pipeline architecture and data flows, building and improving our data systems, and collecting data for cross-functional teams.
Come work in our Ramat Hahayal office and catch an exceptional opportunity to tackle multiple challenging projects while leveraging data sources to come up with new and innovative ideas!
What’s the Job?
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet both functional & non-functional business requirements
- Identify, design, and implement improvements in internal processes: manual processes automation, data delivery optimizations, infrastructure re-designs for greater scalability, etc.
- Use SQL and big data cloud technologies to build the infrastructure required for optimal loading, extraction, and transformation of data from a wide variety of sources
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with data teams to assist with data-related technical issues and support their needs associated with data infrastructure
- Create data tools for analytics and help our team of data scientists build and turn our products into the most advanced in the industry
- Work with data and analytics experts, striving to improve our data systems functionality
What Do You Need?
- 1+ years of experience developing in Big Data, including experience with big data tools such as Hive, Spark, Kafka, etc.
- AWS cloud services experience: EC2, EMR, RDS, Airflow, Athena, Glue, Redshift, Kinesis, etc.
- 3+ years of experience developing solutions with MS SQL Server, Azure SQL, or other relational databases, as well as Snowflake, Amazon Redshift, or Google BigQuery
- 2+ years of experience with an ETL tool such as SQL Server Integration Services (SSIS) or Informatica, including experience building processes that support data transformations, data structures, dependencies, workload management, and metadata
- 1+ years of experience with object-oriented scripting languages such as Python
- Experience with agile development (Scrum, Kanban).
- Ability to take ownership and facilitate consensus among a diverse group of stakeholders.
- Must be willing to collaborate and problem solve in an open team environment.
- Flexible and adaptable in regard to learning and understanding new technologies.
- Highly self-motivated and directed.
- Experience supporting and working with cross-functional teams in a dynamic environment
- B.Sc. or higher degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
Nice to Have
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- 1+ years of experience with object-oriented languages: Java, C++, Scala, etc.
- Experience with ML models (classification, clustering, decision tree-based methods)
- Experience with stream processing systems: Storm, Spark-Streaming, etc.
- Experience with Tableau, Microsoft Reporting Services (SSRS), or Excel.
Webpals Group is a leading digital performance publisher. We deliver the most relevant and valuable users to global online businesses. Our most talented technology and digital marketing experts are committed to driving exceptional performance.
You’ll be joining our Tech Department – the tech nerds who work on challenging projects spanning cutting-edge data-driven content, ML/AI aimed at personalized customer journeys, a data pipelines platform, and more.