Big Data Engineer – Scala, Hadoop and AWS – (4-10 Years) Multiple Locations
We are looking for a professional with more than 4 years of experience in Spark, Scala, or Python. Someone with hands-on experience in Hadoop or Spark, Linux environments, and Hive scripting is the ideal fit for the job.
#Java #Hadoop #Spark #Linux #Python #C++ #Scala
Location: Bangalore, Mumbai, Gurgaon, and Pune.
Your Future Employer: One of the top multinational AI companies.
Responsibilities:
- Evaluating, developing, maintaining, and testing big data solutions for advanced analytics projects.
- Building big data pre-processing and reporting workflows, including collecting, parsing, managing, analyzing, and visualizing large data sets to turn information into business insights.
- Testing various machine learning models on big data and deploying trained models for ongoing scoring and prediction.
Requirements:
- B.E./B.Tech. in Computer Science or a related technical degree.
- 3 to 10 years of demonstrable experience designing technological solutions to complex data problems.
- Scala or Python/PySpark expertise.
- Distributed computing frameworks (Hadoop ecosystem and Spark components).
- Cloud computing platforms (AWS/Azure/GCP).
- Linux environment, SQL, and shell scripting.
What is in store for you:
- A stimulating working environment with equal employment opportunity.
- Growth of your skills while working with industry leaders and top brands.
- A meritocratic culture with great career progression.
Reach us: If you feel that you are the right fit for the role, please share your updated CV at isha.joshi@crescendogroup.in