We are building the Intel Data Store (IntelDS), a global Data Lake for enterprise data. It is a Big Data platform fully hosted on AWS and connected today to more than 40 data sources. The purpose of this job is to support the big data engineering team in building and improving IntelDS by:
• Connecting new sources to enrich the data scope of the platform
• Designing and developing new features, based on consumer application requests, to ingest data into the different layers of IntelDS
• Automating the integration and delivery of data objects and data pipelines

Duties and responsibilities
The duties and responsibilities of this job are to prepare data and make it available in an efficient and optimized format for consumer analytics, BI, or data science applications. It requires working with the technologies currently used by IntelDS, in particular Spark, Presto, and Redshift in an AWS environment. This includes:
• Design and develop new data ingestion patterns into the IntelDS raw and/or unified data layers, based on the requirements for connecting new data sources or building new data objects. Working with ingestion patterns allows the data pipelines to be automated (see the illustrative sketch after the qualifications section).
• Participate in and apply DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. This can include the design and implementation of end-to-end data integration tests and/or CI/CD pipelines.
• Analyse existing data models, then identify and implement performance optimizations for data ingestion and data consumption. The objective is to accelerate data availability within the platform and to consumer applications. Target technologies are Apache Spark, Presto, and Redshift.
• Support client applications in connecting to and consuming data from the platform, and ensure they follow our guidelines and best practices.
• Participate in the monitoring of the platform and the debugging of detected issues and bugs.

Qualifications
Prior experience as a data engineer, with proven experience with Big Data and Data Lakes in a cloud environment. Qualifications include:
• Proven experience working with data pipelines / ETL / BI, regardless of the technology
• Proven experience working with AWS, including at least 4 of: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda
• Big Data technologies and distributed systems: one of Spark, Presto, or Hive
• Python: scripting and object-oriented programming
• Fluency in SQL for data warehousing (Redshift in particular is a plus)
• Familiarity with Git, Linux, and CI/CD pipelines is a plus
• Autonomous, agile, takes the initiative, and a team player
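For illustration only, and not part of the role description: a minimal sketch of the kind of raw-layer ingestion pattern referenced in the duties above, written in PySpark since Python and Spark are the core technologies listed. The bucket names, paths, source format, and technical columns are assumptions made for the example, not IntelDS specifics.

# Hypothetical sketch: land a new source into a raw S3 layer as Parquet,
# partitioned by load date. Paths and schema handling are illustrative assumptions.
from datetime import date

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("raw-layer-ingestion-sketch")
    .getOrCreate()
)

SOURCE_PATH = "s3://example-source-bucket/exports/orders/"  # assumed source drop zone
RAW_PATH = "s3://example-datalake-bucket/raw/orders/"       # assumed raw-layer location

# Read the source export; schema inference keeps the sketch short,
# whereas a production ingestion pattern would pin an explicit schema per source.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(SOURCE_PATH)
)

# Add technical columns commonly needed in a raw layer for lineage and reprocessing.
df = (
    df.withColumn("ingestion_ts", F.current_timestamp())
      .withColumn("load_date", F.lit(date.today().isoformat()))
)

# Write to the raw layer as Parquet, partitioned by load date so downstream
# consumers (for example Presto or Redshift Spectrum) can prune partitions.
(
    df.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet(RAW_PATH)
)

spark.stop()

Partitioning by a load-date column is one common convention for making such a pattern reusable across sources; the actual layout and naming would follow the platform's own guidelines.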