Description:
Job Description

Responsibilities:
- Minimum 3 years of experience in building and deploying Big Data applications using PySpark
- 2+ years of experience with AWS Cloud for data integration with Spark and AWS Glue/EMR
- In-depth understanding of Spark architecture and distributed systems
- Good exposure to Spark job optimization
- Expertise in handling complex, large-scale Big Data environments
- Able to design, develop, test, deploy, maintain, and improve data integration pipelines (see the sketch below)

Mandatory Skills:
- 4+ years of exp
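For illustration only, the following is a minimal sketch of the kind of PySpark data integration pipeline this role describes: read raw data from S3, clean and enrich it, and write partitioned Parquet back out. The bucket names, paths, and column names are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("s3-integration-pipeline")  # hypothetical job name
    .getOrCreate()
)

# Read raw CSV events from a hypothetical S3 location.
events = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/events/")
)

# Basic cleanup and enrichment: drop rows missing key fields,
# derive a date column from the event timestamp.
clean = (
    events
    .dropna(subset=["event_id", "event_ts"])
    .withColumn("event_date", F.to_date(F.col("event_ts")))
)

# Write Parquet partitioned by date. Repartitioning on the partition
# column first is a common Spark job optimization that avoids
# producing many small files per partition.
(
    clean
    .repartition("event_date")
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

spark.stop()
```

The same logic runs unchanged on EMR; in an AWS Glue job it would typically be wrapped in a Glue script that obtains the SparkSession from a GlueContext.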
Feb 10, 2025
from: dice.com