Location: | Bangalore, Karnataka |
Openings: | 20 |
Salary Range: | 25 LPA - 30 LPA |
Description:
Skills and Requirements

Must Have:
• 5+ years of experience in programming and in handling data of varied forms and sizes
• Excellent hands-on experience with SQL
• Knowledge of the Hadoop ecosystem (Hive, HBase, Spark)
• Strong working experience with Python
• Knowledge of Data Warehousing and Data Marts
• Data Modelling knowledge
• Hands-on experience with ETL mechanisms
• Analytical thinking
• Basic visualization knowledge
• Exposure to any cloud platform

Good to Have:
• Exposure to the AWS stack
• Knowledge of distributed systems
• Experience handling whole projects/migrations from scratch

Responsibilities:
• Build systems that collect, manage, and convert raw data into usable information
• Ensure data is collected and managed to meet clients' requirements and to support the Data Science team in building AI models
• Develop data pipelines from internal and external sources, and add structure to previously unstructured data
• Develop and operate modern data architectures to meet key business objectives and provide end-to-end data solutions
• Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities
• Develop POCs that help platform architects, product managers, and software engineers validate solution proposals and plan migrations
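For candidates, a minimal sketch of the kind of pipeline work described above: extracting raw semi-structured records, transforming them into structured rows, and loading them into a relational store. This is illustrative only; all names here (RAW_EVENTS, extract, transform, load, the events table) are hypothetical and not part of the role's actual codebase.

```python
import json
import sqlite3

# Hypothetical raw input: JSON lines, including one malformed record
# that the pipeline must tolerate.
RAW_EVENTS = [
    '{"user": "a1", "amount": "42.5", "ts": "2024-01-05"}',
    '{"user": "b2", "amount": "13.0", "ts": "2024-01-06"}',
    'not-json',
]

def extract(lines):
    """Parse raw JSON lines, skipping records that fail to parse."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    """Coerce types, giving structure to previously unstructured data."""
    for r in records:
        yield (r["user"], float(r["amount"]), r["ts"])

def load(rows, conn):
    """Persist transformed rows into a relational store."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user TEXT, amount REAL, ts TEXT)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_EVENTS)), conn)
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(total)  # 55.5
```

In production this shape generalizes: the same extract/transform/load stages would typically run on Spark or an orchestration framework against a warehouse rather than in-process SQLite.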