Job description

Requirements

  • Entry level
  • No education requirement
  • Salary negotiable
  • Hyderabad

Description

Primary Responsibilities:


- Develop Pig scripts to handle business transformations
- Create RDDs and DataFrames from the required input data and perform data transformations using Spark Core


Required Qualifications:


- B.Tech/MCA
- Experience with Spark Core and Scala, including creating RDDs to perform aggregations, grouping, etc. in Spark
- Experience applying schemas to data in Spark and performing SQL operations using Spark SQL
- Experience transferring large datasets between an RDBMS and HDFS using Sqoop
- Experience with different file formats: Parquet, Avro, ORC, RCFile, etc.
- Experience with streaming tools such as Flume, and knowledge of Kafka
- Hands-on experience integrating Hive with Spark to run HQL queries in Spark
- Hands-on experience with Hive, Pig, Hadoop and the MapReduce framework
- Good experience in database object design: creating tables, views and indexes
- Extensive experience with the Hadoop/Spark framework and its ecosystem: HDFS, MapReduce, Pig, Sqoop, Hive, Flume, HBase, Spark, Cassandra and HUE
- Expert knowledge of the Hadoop framework
- In-depth understanding of HDFS and MapReduce architecture
- Programming languages: C, Java, Scala, Python, PL/SQL
- Web technologies: HTML, JavaScript
- JSE/JEE technologies: JDBC, Servlets, JSP
- Frameworks: Bootstrap, Hibernate, Spring
- Databases: Oracle, MySQL
- Servers: Tomcat, WebLogic
- Logging tool: Log4j; version control: SVN; build tool: Maven
- IDEs: Eclipse, NetBeans, WebStorm, PyCharm
- Operating systems: Windows, Ubuntu
- Extensive work with Hive DDLs and Hive Query Language (HQL)
- Extensive work with CDH4 and CDH5
- Good Core Java application development skills
- Familiarity with the common computing environment (Linux)
- Good business exposure with clients
- Basic knowledge of Scala, Python and Java
- Good knowledge of Spark components: Spark Core, Spark SQL, Spark Streaming, Spark MLlib and GraphX
- Good knowledge of JDBC, Servlets and JSP
- Good knowledge of the Spring framework, the Hibernate ORM tool and RESTful web services
- Good knowledge of Java and Scala design patterns
- Good knowledge of statistics and mathematics
- Excellent communication and interpersonal skills


Careers with Optum. Here's the idea. We built an entire organization around one giant objective: make the health system work better for everyone. So when it comes to how we use the world's largest accumulation of health-related information, or guide health and lifestyle choices, or manage pharmacy benefits for millions, our first goal is to leap beyond the status quo and uncover new ways to serve. Optum, part of the UnitedHealth Group family of businesses, brings together some of the greatest minds and most advanced ideas on where health care has to go in order to reach its fullest potential. For you, that means working on high performance teams against sophisticated challenges that matter. Optum, incredible ideas in one incredible company and a singular opportunity to do your life's best work.(sm)


Job Keywords: Software Engineering Analyst, Software Engineer, SE, Computer Programmer, Computer Programming, Web Developer, Web Development, Software Developer, Software Development, RPA, UIPath, Hive, Pig, Hadoop, MapReduce, HQL, Spark, SparkSQL, Parquet, Avro, ORC, RCFile, RDBMS, HDFS, HBase, Cassandra, Hibernate, Oracle, MySQL, Tomcat, WebLogic, SVN, Maven, NetBeans, WebStorm, PyCharm, Ubuntu, CDH4, CDH5, Linux, Python, Java, JSP, JDBC, MLlib, GraphX, Hyderabad, Telangana