Data Engineer


Current technology includes (but is not limited to):
• ETL (IBM DataStage), Unix scripting, SQL, PL/SQL, Oracle
• Change Data Capture (CDC)
• Data ingestion, preparation, exploration, and consumption using cloud and big data platforms
• Tableau as a Business Intelligence (BI) tool
• Dimensional and relational table structures
• AWS cloud (S3, EC2, EMR, Redshift, etc.)
• Databricks, Snowflake, Attunity, Airflow
• MuleSoft API Management

Roles & responsibilities may include:
• Integrate data from a variety of data sources (data warehouses, data marts) using on-premises or cloud-based data structures
• Develop and implement streaming, data lake, and analytics big data solutions
• Integrate data from multiple sources, applying various ETL techniques and frameworks using Databricks
• Create applications using Change Data Capture tools
• Technical support (including troubleshooting and monitoring)
• Technical analyst and project management support
• Application performance and system testing analysis
• Design, develop, deploy, and support end-to-end ETL specifications based on business requirements and processes, such as source-to-target data mappings, integration workflows, and load processes, using IBM DataStage
• Develop ETL jobs using stages such as Sequential File, Dataset, Transformer, Copy, Lookup, Filter, Join, Merge, Funnel, Sort, Remove Duplicates, Modify, and Aggregator
• Provide day-to-day support, troubleshooting production outages and issues in DataStage, and handle business requirements, enhancements, and service requests


Ideal candidates will have the following experience, knowledge, skills, or abilities:
• Minimum of 5-7 years of IT work experience focused on data acquisition and data integration using DataStage
• Minimum of 4-6 years of experience with Oracle SQL and PL/SQL packages
• Experience working with flat files and XML transformations
• Analyzing DataStage job statistics in Director and conducting performance tuning to reduce table-load times and the load on the compute nodes involved
• Knowledge of and experience with data warehousing applications, with direct responsibility for the extraction, staging, transformation, pre-loading, and loading of data from multiple sources into a data warehouse
• Application development, including cloud development experience, preferably on AWS (especially S3, API Gateway, Redshift, Lambda, etc.)
• Working with different file formats (Hive, Parquet, CSV, JSON, Avro, etc.) and compression techniques
• Knowledge of Business Intelligence tools, enterprise reporting, report development, data modeling, data warehouse architecture, and data warehousing concepts
• Comfortable with AWS cloud (S3, EC2, EMR, Redshift, etc.)
• Ability to collaborate with colleagues across different schools/locations
• Python, Spark, and AWS experience is a big plus
