Job description:
We’re seeking a senior or AI-savvy mid-level engineer who is highly proficient in async Python/FastAPI and has strong Azure cloud expertise.
- Start Date: ASAP
- Contract Duration: 6 months (extension likely)
- Location: Remote, with possible occasional in-person team sessions / workshops / gatherings (e.g. once per quarter), likely to take place in Prague
- Languages: Fluent English
- Working hours: 9-6 or 10-7 CET; a wider overlap (flexibility) is appreciated
You should be comfortable building low-latency APIs, optimizing PostgreSQL at scale, and collaborating closely with Data Science and Frontend teams. Fluency with AI coding assistants is essential for accelerated development in our fast-paced environment. Experience with LLM integration is a plus, but willingness to learn and adapt is equally important. You’ll work across multiple projects with varying architectures (3 active, 3 upcoming), so flexibility, pragmatic problem-solving, and strong collaboration skills are key to success in this role.
Scope:
API Development
- Design and build async FastAPI services with structured logging and low-latency endpoints
- Develop RESTful APIs across multiple microservices (architecture varies by project: 1-6 services)
- Implement WebSocket connections for real-time updates and event-driven patterns
- Optimize database operations with connection pooling (pools of up to 200 connections) and JSONB-aware queries
- Build background task processing systems with retries and idempotency for heavy operations
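The last point above, background task processing with retries and idempotency, can be sketched roughly as follows. This is an illustrative stdlib-only example, not project code: `run_task` and the in-memory `processed_keys` set are made-up names (a production system would keep the idempotency keys in Redis or the database).

```python
import asyncio

# In-memory idempotency guard; in production this would live in Redis or Postgres.
processed_keys: set[str] = set()

async def run_task(key: str, work, *, retries: int = 3, base_delay: float = 0.01):
    """Run `work()` at most once per idempotency key, retrying on failure."""
    if key in processed_keys:
        return "skipped"  # already processed: idempotent no-op
    last_exc = None
    for attempt in range(retries):
        try:
            result = await work()
            processed_keys.add(key)  # mark done only after success
            return result
        except Exception as exc:  # production code would catch narrower types
            last_exc = exc
            await asyncio.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise last_exc
```

The key design point the scope hints at: the idempotency key is recorded only after the work succeeds, so a crashed attempt is retried rather than silently dropped.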
Infrastructure & Deployment
- Containerize services with Docker and deploy via Helm charts to Azure Kubernetes Service
- Manage environment-driven configuration and execute startup database migrations
- Implement background job scheduling with task schedulers, status tracking, and retry logic
- Optimize caching strategies with Redis to cut latency and reduce database load
- Configure CORS policies, middleware, and request/response logging
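"Environment-driven configuration" from the list above might look like the sketch below. The variable names (`APP_DB_DSN`, `APP_POOL_SIZE`, `APP_CORS_ORIGINS`) are hypothetical examples, not a real project's contract; the real stack would more likely use Pydantic settings classes.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    db_dsn: str
    pool_size: int
    cors_origins: tuple[str, ...]

def load_settings(env=None) -> Settings:
    """Build immutable settings from environment variables with safe defaults."""
    env = os.environ if env is None else env
    return Settings(
        db_dsn=env.get("APP_DB_DSN", "postgresql://localhost/app"),
        pool_size=int(env.get("APP_POOL_SIZE", "20")),
        cors_origins=tuple(
            o.strip() for o in env.get("APP_CORS_ORIGINS", "*").split(",")
        ),
    )
```

Keeping configuration frozen and loaded once at startup is what makes the same container image deployable unchanged across environments via Helm values.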
AI/LLM Integration (Nice-to-Have)
- Integrate LangChain and OpenAI APIs for semantic tasks and domain-specific pipelines (valuation, analytics)
- Build document processing systems handling PDF, Excel, and DOCX parsing at scale
- Work with vector databases and Azure AI Search for retrieval-augmented generation (RAG)
- Decouple heavy LLM/document processing from request threads to maintain low P95 latency
- Collaborate with Data Science on prompt engineering, output parsers, and evaluation metrics
Data Architecture
- Design PostgreSQL schemas with proper indexing, foreign keys, and multi-tenant data isolation
- Integrate Azure Blob Storage for document workflows and large file handling
- Implement complex business logic with transactional guarantees and ACID compliance
- Use Cosmos DB connector for specialized NoSQL workloads when appropriate
- Manage database connection lifecycle, pooling, and transaction management
Requirements:
- 3-5+ years of backend development experience
- 2-3+ years of production Python/FastAPI experience
- Azure cloud services experience (REQUIRED): Blob Storage, Azure Kubernetes Service (AKS), AI Search
- Strong async programming patterns and PostgreSQL expertise
- Docker containerization and microservices architecture experience
- Fluency with AI coding assistants (REQUIRED): GitHub Copilot, Cursor, or similar tools for accelerated development
Technical Depth:
- Expert-level async/await patterns and non-blocking I/O in Python
- PostgreSQL optimization: complex queries, indexing strategies, connection pooling with asyncpg
- RESTful API design with JWT authentication and CORS configuration
- Database schema management at scale (50+ tables, complex relationships)
- Performance-conscious development (P95 latency optimization, connection pooling strategies)
- Understanding of when to use asyncio vs. multiprocessing for different workload types
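The asyncio-vs-multiprocessing judgement call in the last bullet comes down to workload type: asyncio shines when the work is waiting (network and database round-trips), while CPU-bound work belongs in a process pool. A rough stdlib illustration, with simulated I/O:

```python
import asyncio
import time

async def fake_io(_: int) -> None:
    await asyncio.sleep(0.05)  # stands in for a network or DB round-trip

async def run_concurrently(n: int) -> float:
    """Overlap n waits on one thread; total time is about one wait, not n waits."""
    start = time.perf_counter()
    await asyncio.gather(*(fake_io(i) for i in range(n)))
    return time.perf_counter() - start
```

For CPU-bound work such as heavy document parsing, the inverse holds: a tight loop never yields at an `await`, so it should be handed to a `ProcessPoolExecutor` via `loop.run_in_executor` rather than run on the event loop.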
Tech Stack:
- Core: Python 3.11+, FastAPI, async/await patterns
- Data Layer: PostgreSQL, asyncpg, SQLAlchemy 2.0, Alembic migrations
- Azure (Required): Blob Storage, Azure Kubernetes Service, AI Search, Cosmos DB
- Deployment: Docker, Kubernetes, Helm charts
- AI/ML: LangChain, OpenAI APIs, vector databases
- Supporting Tools: Redis caching, Pydantic v2, pytest
Nice-to-Have Skills:
- Hands-on Kubernetes/Helm operations experience beyond deployment
- Prior experience with LangChain, LLM integration, or AI/ML pipeline productionization
- Multi-tenant SaaS architecture patterns and data isolation strategies
- Cosmos DB or other NoSQL database experience
- Experience with task schedulers, job queues (Celery, RQ), or workflow orchestration
- Performance profiling and optimization tools (py-spy, cProfile, pgBadger)
SNI sp. z o.o. will process personal data for the purpose of the recruitment process in accordance with its Data Privacy Policy. The data may also be stored and processed for future recruitment purposes, in accordance with the consent given.