Machine Learning Engineer (Python)

  • Durham
  • ACL Digital
Dynamic work schedule: 5 days on site per month, all within the same week; the remainder of the month is work from home. (You can fly or drive into the office as well.)
Must Have:
  • 10+ years of programming experience (3+ years writing Python), with FastAPI or Flask
  • 2+ years of AWS engineering experience (working knowledge of the AWS ecosystem required)
  • Clear communication skills.
Fidelity TalentSource is your destination for discovering your next temporary role at Fidelity Investments. We are currently sourcing Senior-level Machine Learning Engineers to work in one of our regional locations: Durham, NC; Boston, MA; Jersey City, NJ; Westlake, TX!
The Role
As a Machine Learning Engineer, you will build and maintain large-scale ML infrastructure and ML pipelines. You will contribute to advanced analytics and machine learning platforms and tools that enable both prediction and optimization models, extend the existing ML platform and frameworks to scale model training and deployment, and partner closely with business and engineering teams to drive the adoption and integration of model outputs. This role is critical to harnessing the power of data science in delivering Fidelity's promise of creating the best customer experiences in financial services.
The Expertise and Skills You Bring
  • Bachelor's or Master's Degree in a technology related field (e.g. Engineering, Computer Science, etc.)
  • 8+ years of proven experience implementing big data solutions in the data analytics space
  • 2+ years of experience developing ML infrastructure and MLOps in the cloud using AWS SageMaker
  • Extensive experience with machine learning models across deployment, inference, tuning, and measurement (required)
  • Experience in object-oriented programming (Java, Scala, Python), SQL, Unix scripting, or related languages, and exposure to Python's ML ecosystem (numpy, pandas, sklearn, tensorflow, etc.)
  • Experience building data pipelines that supply the data required to build and evaluate ML models, using tools such as Apache Spark or other distributed data processing frameworks
  • Data movement technologies (ETL/ELT), messaging/streaming technologies (AWS SQS, Kinesis/Kafka), relational and NoSQL databases (e.g., DynamoDB, graph databases), and API and in-memory technologies
  • Strong knowledge of developing highly scalable distributed systems using open-source technologies
  • Experience with CI/CD tools (e.g., Jenkins or equivalent), version control (Git), orchestration/DAGs tools (AWS Step Functions, Airflow, Luigi, Kubeflow, or equivalent)
  • Solid experience in Agile methodologies (Kanban and Scrum)
  • Strong technical design and analysis skills
  • Ability to deal with ambiguity and work in a fast-paced environment
  • Experience supporting critical applications
  • Familiarity with applied data science methods, feature engineering and machine learning algorithms
  • Data wrangling experience with structured, semi-structured and unstructured data
  • Experience building ML infrastructure with an eye toward software engineering
  • Excellent communication skills, both written and verbal
  • Excellent collaboration skills to work with multiple teams in the organization
  • Ability to understand and adapt to changing business priorities and technology advancements in the big data and data science ecosystem

The Team
The PI Data Engineering team (part of the Personal Investing Technology BU) is focused on delivering data and ML solutions for the organization. As part of this team, you will be responsible for building advanced analytics solutions using various cloud technologies and collaborating with data scientists to robustly scale ML models to large production volumes.