Description

This ETL Developer (Engineer II) position in Network Business Intelligence - Engineering Applications (NBI-Apps) will work in a fun, challenging, fast-paced environment to develop Extract-Transform-Load (ETL) processes that enable the engineering arm of Windstream to function more efficiently and effectively. We are looking for an ETL developer responsible for implementing the programmatic collection and consolidation of data from Windstream systems into an Engineering department RDBMS. Examples of the data categories included are network topology and performance, OSS, financial, parts/purchasing, and billing. All major vendors of RDBMS systems are in use at Windstream. Your primary focus will be development of Extract-Transform-Load logic using CloverETL, Python scripting, and Hadoop data integration and processing packages, in addition to migration of legacy scripted solutions to these paradigms. Also included will be database development in DDL/DML (primarily Oracle) and operational support of the ETL software infrastructure and processes. On top of the Oracle database development skills and the ETL tool experience, software development experience is essential.

Job Responsibilities:
- Development of Extract-Transform-Load logic using CloverETL and Python languages and systems to support business intelligence needs.
- Migration of legacy scripted solutions to our newer ecosystem of tools (CloverETL, Python).
- Database development in DDL and DML (primarily Oracle).
- Building reusable code and libraries for future use.
- Managing work through Agile tools/methodology, collaborative repositories, issue tracking platforms, and wikis.
- Managing projects through to completion.
- Effective communication in person and using JIRA, Confluence, email, and chat tools.
- Effective collaboration in a dynamic team environment.
- Independent project execution with minimal oversight.

Essential Skills:
- Extract-Transform-Load methodologies and patterns.
- Oracle database development, including SQL, DDL, and DML.
- Javlin CloverETL development and deployment. Experience with comparable ETL tools (Informatica, Alteryx, MS DTS) will be considered.
- Programming in the Bash and Python languages. Experience with comparable languages (Perl, TCL, NodeJS) will be considered.
- Proficiency with code versioning tools, such as Git.
- Data retrieval from files, web-based APIs, and RDBMS (Oracle, MySQL, MsSQL).
- Experience working with large, disparate data sets.
- Web service technologies and APIs (REST, RPC, SOAP, etc.).
- Data exchange formats: delimited, fixed-format, XML, JSON, and YAML.
- Drive to succeed and improve personally, and to add value to the role, team, and company.
- Self-starter: relentlessly curious, resourceful, collaborative, and inventive.
- Good team player and communicator.
- Highly organized and meticulous.
- Positive attitude and the desire to solve problems in elegant and creative ways.

Desired Skills:
- Apache Hadoop platform experience – Ambari, Pig, Hive, HBase, Spark, etc.
- Database warehousing and performance tuning experience helpful.
- Java development experience.
- Familiarity with command-line operating systems and shells (Linux, Cisco IOS).
- Network programming concepts: IPv4, sockets, SSL, port-forwarding.
- Unix/Linux administration.
- User experience with JIRA and Confluence.
- Tableau visualization experience.

Qualifications

Minimum Requirements for Education and Experience:
College degree and 2-4 years of related experience, or 6 years of equivalent related Engineering/Technical experience, or a combination of education and related Engineering/Technical experience required.

Desired Qualifications:
Ability to stay current with economic and technological developments in the industry.