Talent League: Top Vacancy
Dear Candidate,
Before submitting your resume, please check the location requirement: we will not be able to review your resume or provide feedback if you are not actually located where the vacancy is based.
Thank you for your understanding.
Cloud and ML Infrastructure Engineer (AWS)
We are seeking skilled Cloud and ML Infrastructure Engineers to lead the buildout of our AWS foundation and our LLM platform. You will design, implement, and operate services that are scalable, reliable, and secure.
Given the broad scope, a prior focus on LLM/ML infrastructure or IoT infrastructure is a strong plus. On the ML side, you will build the stack that powers retrieval-augmented generation (RAG) and application workflows built with frameworks such as LangChain. Experience with AWS IoT services is also a plus.
You will work closely with other engineers and product management. The ideal candidate is hands-on, comfortable with ambiguity, and excited to build from first principles.
Key Responsibilities
Cloud Infrastructure Setup and Maintenance
- Design, provision, and maintain AWS infrastructure using IaC tools such as AWS CDK or Terraform.
- Build CI/CD and testing for apps, infra, and ML pipelines using GitHub Actions, CodeBuild, and CodePipeline.
- Operate secure networking with VPCs, PrivateLink, and VPC endpoints. Manage IAM, KMS, Secrets Manager, and audit logging.
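To give a concrete (and purely illustrative) flavor of this infrastructure-as-code work, here is a minimal AWS CDK sketch in Python; the stack name, endpoint choice, and bucket settings are assumptions, not a prescribed design.

```python
# Minimal AWS CDK (v2, Python) sketch: VPC, PrivateLink endpoint, encrypted bucket.
# Names and settings are illustrative only.
from aws_cdk import App, Stack, aws_ec2 as ec2, aws_s3 as s3
from constructs import Construct

class CoreInfraStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private networking for application and ML workloads.
        vpc = ec2.Vpc(self, "CoreVpc", max_azs=2)

        # Keep traffic to AWS APIs on the AWS network via PrivateLink.
        vpc.add_interface_endpoint(
            "SecretsManagerEndpoint",
            service=ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
        )

        # Encrypted, private bucket for pipeline artifacts.
        s3.Bucket(
            self, "ArtifactsBucket",
            encryption=s3.BucketEncryption.KMS_MANAGED,
            enforce_ssl=True,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = App()
CoreInfraStack(app, "CoreInfraStack")
app.synth()
```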
LLM Platform and Runtime
- Stand up and operate model endpoints using AWS Bedrock and/or SageMaker; evaluate when to use ECS/EKS, Lambda, or Batch for inference jobs.
- Build and maintain application services that call LLMs through clean APIs, with streaming, batching, and backoff strategies.
- Implement prompt and tool execution flows with LangChain or similar, including agent tools and function calling.
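As a hedged illustration of the runtime layer, the sketch below calls a Bedrock model through boto3 with exponential backoff on throttling; the model ID and retry budget are placeholders, and a production service would add streaming, batching, and structured error handling.

```python
# Sketch: call an Amazon Bedrock model with retry/backoff on throttling.
# Model ID and retry settings are placeholders.
import time
import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # example model ID

def ask_llm(prompt: str, max_retries: int = 5) -> str:
    for attempt in range(max_retries):
        try:
            response = bedrock.converse(
                modelId=MODEL_ID,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
                inferenceConfig={"maxTokens": 512, "temperature": 0.2},
            )
            return response["output"]["message"]["content"][0]["text"]
        except ClientError as err:
            # Back off on throttling; surface everything else immediately.
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise
            time.sleep(2 ** attempt)
    raise RuntimeError("Bedrock request kept throttling after retries")

print(ask_llm("Summarize our on-call runbook in three bullet points."))
```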
RAG Data Systems and Vector Search
- Design chunking and embedding pipelines for documents, time series, and multimedia. Orchestrate with Step Functions or Airflow.
- Operate vector search using OpenSearch Serverless, Aurora PostgreSQL with pgvector, or Pinecone. Tune recall, latency, and cost.
- Build and maintain knowledge bases and data syncs from S3, Aurora, DynamoDB, and external sources.
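One possible shape for the retrieval layer, shown only as a sketch: chunk embeddings stored in Aurora PostgreSQL with pgvector and queried by cosine distance. The table name, connection string, and embed() helper are illustrative assumptions.

```python
# Sketch: top-k retrieval over pgvector in PostgreSQL/Aurora.
# Table name, connection string, and embed() are illustrative assumptions.
import psycopg2

def embed(text: str) -> list[float]:
    """Placeholder for an embedding call (e.g., a Bedrock embedding model)."""
    raise NotImplementedError

def top_k_chunks(question: str, k: int = 5) -> list[str]:
    query_vec = embed(question)
    with psycopg2.connect("postgresql://user:pass@host:5432/rag") as conn:
        with conn.cursor() as cur:
            # <=> is pgvector's cosine distance operator.
            cur.execute(
                """
                SELECT chunk_text
                FROM doc_chunks
                ORDER BY embedding <=> %s::vector
                LIMIT %s
                """,
                (str(query_vec), k),
            )
            return [row[0] for row in cur.fetchall()]
```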
Evaluation, Observability, and Cost Governance
- Create offline and online eval harnesses for prompts, retrievers, and chains. Track quality, latency, and regression risk.
- Instrument model and app telemetry with CloudWatch and OpenTelemetry. Build token usage and cost dashboards with budgets and alerts.
- Add guardrails, rate limits, fallbacks, and provider routing for resilience.
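A minimal sketch of what an offline eval harness might look like, assuming a retrieve() function under test and a small labeled set; a real harness would also track latency percentiles, token cost, and regression thresholds.

```python
# Sketch: offline retrieval eval on a small labeled set.
# retrieve() and the dataset are assumptions for illustration.
import time

labeled_set = [
    {"question": "What is the SLA for the ingest API?", "expected_doc": "sla.md"},
    {"question": "How do we rotate KMS keys?", "expected_doc": "kms-rotation.md"},
]

def retrieve(question: str, k: int = 5) -> list[str]:
    """Placeholder for the retriever under test; returns document IDs."""
    raise NotImplementedError

def run_eval() -> None:
    hits, latencies = 0, []
    for case in labeled_set:
        start = time.perf_counter()
        results = retrieve(case["question"])
        latencies.append(time.perf_counter() - start)
        hits += case["expected_doc"] in results  # hit@k
    print(f"hit@5: {hits / len(labeled_set):.2f}, "
          f"avg latency: {sum(latencies) / len(latencies) * 1000:.0f} ms")
```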
Safety, Privacy, and Compliance
- Implement PII detection and redaction, access controls, content filters, and human-in-the-loop review where needed.
- Use Bedrock Guardrails or policy services to enforce safety standards. Maintain audit trails for regulated environments.
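As one hedged example of this kind of safety tooling, the sketch below redacts PII spans detected by Amazon Comprehend before text reaches a model or a log; the confidence threshold and the choice of service are illustrative.

```python
# Sketch: redact PII spans detected by Amazon Comprehend before prompting/logging.
# The confidence threshold is an illustrative choice.
import boto3

comprehend = boto3.client("comprehend")

def redact_pii(text: str, min_score: float = 0.8) -> str:
    entities = comprehend.detect_pii_entities(Text=text, LanguageCode="en")["Entities"]
    # Replace from the end so earlier offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        if ent["Score"] >= min_score:
            text = text[:ent["BeginOffset"]] + f"[{ent['Type']}]" + text[ent["EndOffset"]:]
    return text

print(redact_pii("Contact Jane Doe at jane@example.com about ticket 4521."))
```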
Data Pipeline Construction
- Build ingestion and processing pipelines for structured, unstructured, and multimedia data. Ensure integrity, lineage, and cataloging with Glue and Lake Formation.
- Optimize bulk data movement and storage in S3, Glacier, and tiered storage. Use Athena for ad-hoc analysis.
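For a taste of the ad-hoc analysis side, here is a minimal sketch that launches an Athena query from Python; the database, table, and results bucket are placeholder names.

```python
# Sketch: kick off an ad-hoc Athena query from Python.
# Database, table, and the results bucket are placeholder names.
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT source, COUNT(*) AS docs FROM raw_documents GROUP BY source",
    QueryExecutionContext={"Database": "data_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("query execution id:", execution["QueryExecutionId"])
```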
IoT Deployment Management
- Manage infrastructure that deploys to and communicates with edge devices. Support secure messaging, identity, and over-the-air updates.
Analytics and Application Support
- Partner with product and application teams to integrate retrieval services, embeddings, and LLM chains into user-facing features.
- Provide expert troubleshooting for cloud and ML services with an emphasis on uptime and performance.
Performance Optimization
- Tune retrieval quality, context window use, and caching with Redis or Bedrock Knowledge Bases.
- Optimize inference with model selection, quantization where applicable, GPU/CPU instance choices, and autoscaling strategies.
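A small illustrative sketch of response caching with Redis, keyed by a prompt hash; the host, TTL, and ask_llm() helper are assumptions, not a prescribed design.

```python
# Sketch: cache LLM responses in Redis keyed by a prompt hash.
# Host, TTL, and ask_llm() are illustrative assumptions.
import hashlib
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def ask_llm(prompt: str) -> str:
    """Placeholder for the model call (e.g., the Bedrock helper above)."""
    raise NotImplementedError

def cached_answer(prompt: str, ttl_seconds: int = 3600) -> str:
    key = "llm:" + hashlib.sha256(prompt.encode()).hexdigest()
    if (hit := cache.get(key)) is not None:
        return hit
    answer = ask_llm(prompt)
    cache.setex(key, ttl_seconds, answer)
    return answer
```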
What Will Make You Successful:
- End-to-End Ownership: Drives work from design through production, including on-call and continuous improvement.
- LLM Systems Experience: Shipped or operated LLM-powered applications in production. Familiar with RAG design, prompt versioning, and chain orchestration using LangChain or similar.
- AWS Depth: Strong with core AWS services such as VPC, IAM, KMS, CloudWatch, S3, ECS/EKS, Lambda, Step Functions, Bedrock, and SageMaker.
- Data Engineering Skills: Comfortable building ingestion and transformation pipelines in Python. Familiar with Glue, Athena, and event-driven patterns using EventBridge and SQS.
- Security Mindset: Applies least privilege, secrets management, network isolation, and compliance practices appropriate to sensitive data.
- Evaluation and Metrics: Uses quantitative evals, A/B testing, and live metrics to guide improvements.
- Clear Communication: Explains tradeoffs and aligns partners across product, security, and application engineering.
Bonus Points:
- 4+ years working with serverless or container platforms on AWS.
- Experience with vector databases, OpenSearch, or pgvector at scale.
- Hands-on with Bedrock Guardrails, Knowledge Bases, or custom policy engines.
- Familiarity with GPU workloads, Triton Inference Server, or TensorRT-LLM.
- Experience with big data tools for large-scale processing and search.
- Background in aviation data or other safety-critical domains.
- DevOps or DevSecOps experience automating CI/CD for ML and app services.
Required qualifications:
- Scope: Independently delivers features and subsystems.
- Contributions: Builds CI/CD pipelines, deploys ML endpoints (Bedrock, SageMaker), develops RAG pipelines and vector search integrations, manages infra security (IAM, KMS).
- Requirements: 3–5 yrs in cloud/infra/ML systems, hands-on with AWS services, experience with APIs, data pipelines, and at least one ML/LLM integration.
Location:
- This is a hybrid role and requires working from our San Carlos, CA office at least three days a week, with the option to work remotely the remaining days.
Senior BI Developer
We are seeking a Senior BI Developer to lead the design, development, and optimization of scalable business intelligence solutions. This role will be responsible for building robust reporting products, semantic data models, and analytics capabilities that enable data-driven decision-making across the organization. The ideal candidate will bring deep technical expertise, a strong understanding of BI architecture, and a passion for delivering high-impact data solutions.
Position Duties and Responsibilities:
- Design and deliver advanced BI solutions including the development of reporting products, dashboards, data ingestion, transformation, semantic modeling, and visualization.
- Develop and maintain reporting products within a standard software development lifecycle framework in support of operational, financial, and strategic decision-making, with a focus on scalability, usability, and performance.
- Apply UI/UX best practices and data storytelling techniques to create intuitive, insight-driven reporting experiences that enable decision-making.
- Build and optimize relational data models and semantic layers to support governed self-service analytics.
- Implement and manage BI governance frameworks, including workspace management, row-level security (RLS), sensitivity labeling, and lifecycle strategies.
- Demonstrate high attention to detail, quality, and accuracy in all deliverables, instilling confidence in data products and enabling trusted decision-making across the organization.
- Apply advanced analytics techniques, including time intelligence, AI visuals, and what-if scenario modeling to enhance insight generation.
- Drive automation and performance optimization through incremental refresh, deployment pipelines, and DAX tuning.
- Collaborate cross-functionally with data engineers, analysts, and business stakeholders to translate requirements into robust BI solutions.
- Standardize BI development practices and promote adoption of best-in-class tools and methodologies.
- Serve as a subject matter expert in BI tooling and architecture, providing mentorship to junior developers and analysts.
- Perform other duties as assigned.
Knowledge, Skills, and Abilities:
- Excellent analytical and problem-solving skills
- Ability to work collaboratively across multiple functions (Operations/Finance/IT/Rev Cycle)
- Ability to function within a software development lifecycle, Agile/Scrum/Kanban framework
- Highly motivated self-starter who is an excellent team player
- Ability to understand business use cases and apply technical solutions
- Ability to identify the right visualizations to enable insights
- Proven ability to work well independently and proactively provide solutions to complex problems
- Outstanding organizational and communication (both verbal and written) skills
Required qualifications:
- Bachelor’s degree in Information Technology, Computer Science, Engineering, Business or a related field
- 7+ years of experience in BI development, with a strong focus on Power BI.
- Expert-level proficiency in DAX, Power Query (M), and SQL.
- Experience with Azure Data Services and Databricks.
- Experience with the MS Power Platform including Power Automate and Power Apps.
- Strong understanding of data modeling, ETL processes, views, and semantic layer design.
- Proven ability to manage BI governance and security protocols.
- Excellent communication and stakeholder engagement skills.
- Any equivalent combination of education and experience
Full Stack Developer (Python/React with AWS)
We’re looking for an autonomous Full Stack Developer with strong Python/React skills and AWS cloud experience to take on development tasks within an existing code repository.
About the Role
You’ll join our team to handle the full development lifecycle, from design to deployment, for high-traffic web applications. We need someone who can independently tackle complex challenges, actively contribute to requirements discussions, and propose effective technical solutions.
What You'll Do:
- Develop and maintain high-traffic web applications using Python and React.
- Design and implement REST APIs.
- Integrate with both internal and third-party APIs (e.g., Stripe).
- Translate business requirements into technical specifications.
- Work extensively with the AWS cloud infrastructure.
- Participate in the entire development lifecycle, including testing and deployment.
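For a sense of the day-to-day work, here is a minimal, illustrative Python Lambda handler behind a REST endpoint; the route and payload shape are assumptions, and a real handler would add validation, authentication (e.g., Cognito claims), and error handling.

```python
# Sketch: minimal Python Lambda handler for a REST endpoint.
# The route and response shape are illustrative only.
import json

def handler(event, context):
    # API Gateway proxy integration passes the request body as a string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```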
What We're Looking For:
- 5+ years of experience developing Python/React applications in AWS infrastructure
- Python
- React / TypeScript
- AWS (CloudFormation / SAM)
- AppSync
- Cognito
- Lambda
- Postgres
- GORM
- NManage Agents
- LangGraph
- OpenAI
Why join us:
- Work Anywhere: Embrace the freedom to work from anywhere in the world. Your office could be a beach, a cozy café, or wherever you feel most inspired.
- Flexibility: Wave goodbye to the 9-to-5 grind. We believe in a flexible working schedule that fits your life.
- Sponsored Education: We're invested in your growth. Enjoy sponsored education and training, with up to 50% of costs covered.
- Personal Development: We're not just about work; we're about your growth. Craft your personal development plan and watch your career soar.
- Regular Salary Reviews: Your hard work won't go unnoticed. We conduct regular salary reviews to ensure you're fairly rewarded.
- Career Advancement: The sky's the limit. Move up the ladder based on your performance, and your career trajectory could surprise you.
- Corporate Events: From team outings to memorable celebrations, we know how to have a good time together.
- English Classes: Enhance your language skills and open doors to global opportunities with our sponsored English classes.
- Health Matters: Your health is our priority. Get your annual flu shot on us.
- Work Equipment: We provide top-notch tools. Receive $600 toward your work equipment.
- Paid vacation and sick leave.
Dear Candidate,
In an era of rapid technological advancement and the constant evolution of artificial intelligence, at Zazmiс, we believe in the importance of analyzing resumes not only through automated tools but also through interaction with a live recruiter. We value an individualized approach to each candidate and strive to make the hiring process more friendly and efficient.
Understanding the significance of your time and that of our colleagues, we offer you the opportunity to provide additional information that will help us better understand your profile and its alignment with the job description. Your initiative will assist us in making a more informed decision when considering your candidacy.
Please note that Zazmiс reserves the right not to respond to a candidate’s application if we conclude that the candidate does not meet our requirements for any reason. Please understand this as part of our commitment to an efficient and fair hiring process.
Thank you for your understanding and participation in our recruitment process.
Best regards,
The Zazmiс Team