Experience
2022 — Now
New York, United States
● Owned and executed the technical vision for a cloud-native, event-driven infrastructure platform, bridging on-prem robotics systems with AWS (ECS, Lambda, API Gateway, DynamoDB); scaled to 400K+ devices and 25K+ deployments, delivering $3M+ in operational efficiency.
● Architected a distributed Guardrails & Network Launch Validation (NLV) platform, enabling policy-driven, automated compliance across 200K+ devices with 1M–2M daily checks; reduced high-risk failures from 80% to <3% and eliminated multimillion-dollar launch regressions.
● Built a horizontally scalable execution framework (ECS/Fargate, multi-threaded/multi-executor) to process million-scale distributed deployments/day, solving for concurrency, throttling, latency, and regional scheduling constraints.
● Designed a resilient control-plane architecture (API Gateway → microservices → workers → event/state systems) with strong observability (CloudWatch, EventBridge, SNS), ensuring high availability, auditability, and controlled blast radius for Tier-0 systems.
● Built and deployed a company-wide RAG-based AI assistant accessible through multiple interfaces; served 500K+ queries this year, improving how teams discover internal information.
● Designed and operated a real-time syslog ingestion and processing platform consuming 10M+ events/day on a distributed streaming and storage architecture; layered autonomous AI agents on this pipeline for detection, root-cause analysis, and P0–P3 priority classification, triggering context-aware auto-remediation workflows that reduced manual triage by 40% and significantly improved MTTR.
● Mentored 20+ engineers, enabling five promotions through structured career guidance, technical coaching, and performance feedback.
● Spearheaded knowledge-sharing initiatives by conducting 15+ product-focused sessions for 2,000+ attendees, increasing cross-team technical competency and alignment.
2020 — 2022
New York, United States
• Led a team of software developers building an end-to-end data platform on AWS, including data pipelines, data quality and monitoring tools, and a data transformation layer
• Managed communication with executives and senior management to deliver digital solutions automating business processes
• Automated data pipelines using TeamCity CI/CD and Airflow to streamline business processes
• Led proofs of concept comparing data warehousing technologies (Redshift vs. Snowflake) and delivered recommendations
• Implemented a derived-store solution on Snowflake and migrated processes off legacy databases
• Redesigned stored procedures using dbt (open source), producing an advanced, easy-to-maintain data transformation layer
• Designed and developed self-service Kafka streaming pipelines on an AWS MSK cluster to stream data from S3 to Snowflake
• Designed and implemented a self-service data transformation and contribution layer that lets external BU teams set up their own data pipelines
• Designed and implemented a self-service analytics layer using Metabase (open source), enabling users across BUs to explore data and build dashboards for analytics use cases
• Developed Streamlit apps for interactive data science applications
2018 — 2020
Jersey City, New Jersey
• Built big data pipelines for machine learning at scale
• Designed and automated end-to-end distributed machine learning data pipelines processing 1TB+ of data for Customer Journey Analytics
• Implemented big data designs in Spark for several use cases involving billions of rows on a 300-node cluster
• Implemented Spark optimizations that cut overall runtime by more than 50%
• Automated data pipelines and created web APIs feeding a customer behavioral analytics dashboard
• Built several Scala UDFs for machine learning pipelines using algorithms including state-space models, GBM, Random Forest, XGBoost, SVM, and k-means
2016 — 2018
Greater New York City Area
• Led meetings with business and IT stakeholders to gather requirements and finalize use cases
• Led multiple projects to design, develop, and implement technology platforms delivering on these business requirements
• Built big data pipelines and machine learning models for an Encounter Tracking application and Risk Analytics, reducing overall cost by almost 30% (~$3M); developed predictive models classifying high-risk vs. low-risk consumers
• Performed POCs in a Hadoop environment and built machine learning models providing actionable insights for clinical and finance use cases
• Designed and built Tableau dashboards per stakeholder requirements, enabling daily monitoring of data flow, load, and performance for the Encounter Tracking application
2015 — 2015
Greater New York City Area
• Designed the organization's analytics architecture and proposed technology stacks for implementing analytics using a data-lake approach on Hadoop
• Built big data pipelines using Sqoop, Spark, Hive, and HDFS
• Built machine learning models in Python to bucket high-risk patients for better clinical care strategies
Education
Syracuse University
Master of Science (MS)
2014 — 2016
Bharati Vidyapeeth
Bachelor of Engineering (BE)
2006 — 2010