Job Description
🎯 Key Responsibilities
1️⃣ Machine Learning & Statistical Modeling
- Design and develop:
  - Predictive models
  - Forecasting solutions
  - Optimization algorithms
- Apply statistical and mathematical modeling to business problems.
- Deploy scalable solutions across environments including AWS cloud.
2️⃣ Application & Tool Development
- Build interactive analytics applications using R Shiny.
- Develop APIs and automation using:
  - Python REST APIs
  - Unix shell scripting
- Create monitoring alerts and event-handling systems.
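A minimal sketch of the API-plus-monitoring work described above, using only Python's standard-library `http.server`: a GET endpoint returns a JSON payload whose `alert` flag fires when an error rate crosses a threshold. The endpoint, threshold, and error-rate value are all hypothetical; a production service would use a framework such as Flask or FastAPI.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ERROR_RATE_THRESHOLD = 0.05  # hypothetical alerting threshold

def check_alert(error_rate: float) -> dict:
    """Build an alert payload; fires when error_rate exceeds the threshold."""
    return {"error_rate": error_rate, "alert": error_rate > ERROR_RATE_THRESHOLD}

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal REST-style GET endpoint returning JSON status."""
    def do_GET(self):
        body = json.dumps(check_alert(error_rate=0.02)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to an OS-assigned free port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Query the endpoint once, then shut the server down.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = json.loads(resp.read())
server.shutdown()
```

In practice the same threshold check would be wired to an alerting channel (email, PagerDuty, CloudWatch) rather than returned inline.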
3️⃣ Data Analytics & Engineering
- Perform deep-dive analytics using:
  - Python
  - PySpark
  - R
- Work across distributed datasets and environments.
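A core pattern in this kind of analytics work is group-and-aggregate. The sketch below shows it in plain Python over hypothetical transaction records; on a distributed dataset the same analysis would be expressed with PySpark's DataFrame API, e.g. `df.groupBy("region").sum("amount")`.

```python
from collections import defaultdict

# Hypothetical (region, amount) transaction records.
transactions = [
    ("north", 120.0),
    ("south", 75.5),
    ("north", 60.0),
    ("east", 200.0),
    ("south", 24.5),
]

def total_by_region(rows):
    """Sum transaction amounts per region (a local group-and-aggregate)."""
    totals = defaultdict(float)
    for region, amount in rows:
        totals[region] += amount
    return dict(totals)

print(total_by_region(transactions))
```

The local version is useful for prototyping on a sample before running the equivalent PySpark job across the full distributed dataset.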
4️⃣ Visualization & Reporting
- Develop dashboards using Tableau.
- Translate technical findings into business-ready insights and reports.
5️⃣ Collaboration & Agile Delivery
- Work with cross-functional teams (technical + functional stakeholders).
- Track work in Agile frameworks using tools like Jira.
🧠 Technical Stack Summary
Programming & Modeling
- Python
- R
- PySpark
- Statistical modeling & ML
Cloud & Infrastructure
- AWS
- Virtual environments
- REST APIs
- Unix Shell
Visualization
- Tableau
- R Shiny
Workflow
- Agile methodology
- Jira
💼 Role Level Interpretation
Based on responsibilities, this role typically aligns with:
- Mid-level Data Scientist (3–6 years' experience)
- Applied Data Scientist / ML Engineer hybrid
- Analytics Platform Developer (in some organizations)
The presence of:
- Monitoring automation
- API development
- Cloud deployment
suggests strong production exposure, which is valuable for career growth.
🚀 Strengths of This Role for Career Growth
This job builds expertise in:
- End-to-end ML lifecycle
- Cloud deployment
- Data apps (R Shiny)
- Distributed computing (PySpark)
- Production monitoring
These skills are highly transferable to:
- Senior Data Scientist
- Machine Learning Engineer
- AI Engineer
- Data Platform roles