IND - Associate Engineer, Data

April 16, 2026 · Full Time · Workday
IND Associate Engineer, Data - GCC061

We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.

Key Responsibilities 

  • Data and AI Engineer responsible for implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions. 

  • Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate them with enterprise data infrastructure. 

  • Build and maintain scalable, robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark Streaming, or similar. 

  • Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics. 

  • Develop AI-driven systems to improve data capabilities. 

  • Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management. 

  • Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems. 


Required Skills & Experience 

  • Bachelor's degree in Computer Science, Artificial Intelligence, or a related field. 

  • 2 years of data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, big data, cloud technologies (AWS/GCP/Azure), Python/Spark, and Data Mesh, Data Lake, or Data Fabric architectures. 

  • Less than 2 years of experience will be considered with an advanced degree and applicable internship experience. 

  • 1+ years of experience with cloud platforms (AWS, GCP, or Azure). 

  • 1+ years of data engineering experience focused on supporting AI technologies. 

  • 1+ years implementing AI data solutions. 

  • 1+ years of experience with prompt engineering techniques for large language models. 

  • 1+ years implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models. 

  • 1+ years implementing AI-driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph). 

  • 1+ years of programming experience in Python. 

  • 1+ years building AI pipelines that bring together structured, semi-structured, and unstructured data. 

  • 1+ years with vector databases, graph databases, NoSQL, and document databases, including design, implementation, and optimization (e.g., Amazon OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Amazon Neptune, MongoDB, DynamoDB). 

  • Strong written and verbal communication skills. 

  • Able to communicate effectively with technical teams. 

  • Team player who collaborates effectively across teams. 

  • Strong organization and execution skills. 

  • Strong interpersonal and time management skills. 

  • Ability to work successfully in a lean, agile, and fast-paced organization, leveraging Agile principles and ways of working. 

  • Ability to translate technical topics into business solutions and strategies. 


