Motivity Labs' Enterprise Architecture organization is building a team of dynamic and influential thought leaders, representing diverse backgrounds and specific focus areas. The Data Engineer role requires experience with the design, architecture, and implementation of Modern Data and Analytics Platforms. Responsibilities include collaboratively developing and publishing reference architectures, analysing systems and technologies, and influencing the implementation of Motivity Labs' Modern Data Platform Strategy.
A candidate for this position must have at least 7 years of experience working in a data engineering department, preferably as a Data Engineer in a fast-paced environment and complex business setting. The candidate must have demonstrated experience building and maintaining reliable and scalable ETL/ELT pipelines on big data platforms, as well as experience working with varied forms of data infrastructure, including relational databases such as MySQL, distributed processing frameworks such as Hadoop and Spark, and column-oriented databases such as Redshift and Vertica.
The candidate must also have experience in data warehousing, including dimensional modelling concepts, and demonstrate proficiency in scripting languages such as Python and Perl. A suitable candidate will also demonstrate machine learning experience and experience with big data infrastructure, including MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc. The candidate will additionally demonstrate substantial experience and a deep knowledge of data mining techniques and relational and non-relational databases.
Experience in designing, building, and implementing data pipelines using Azure/AWS/GCP data services such as ADF/Glue/Data Flow/Data Fusion/Dataproc/Databricks, etc.
- A problem-solving mindset with the ability to understand business challenges and how to apply analytics expertise to solve them.
- Build compelling, clear, and powerful visualizations of data.
- Design, architect, and implement Azure Data Factory V2 Pipelines/Glue/Data Fusion/Data Flow
- Should have worked on Team Foundation Server/Git as part of ADF implementation.
- Experience in migration projects, e.g., moving/implementing code from another ETL tool to ADF/Glue/Data Flow
- Develop solutions to implement error logging and alerting mechanisms using various available Azure services.
- A team mentality empowered by the ability to work with a diverse set of individuals.
- Tell a clear story, highlight insights we had not previously seen, and do it all faster than we’ve been able to previously.
- Create, maintain, and document a robust set of metrics to monitor day-to-day bug detection and long-term performance tracking/tuning.
- Have a strong understanding of best practices, standards, and guidelines in each technology/product and apply the same to produce high-quality deliverables
- Should be able to take responsibility for the full software life cycle: environment setup, design, code, test, repository management, and deployments as applicable and as required
- DataOps/DevOps/DevSecOps best practices
- MLOps implementation
- Data catalog and metadata implementations across large enterprises
- Cloud data warehouse experience in Snowflake