Symphony Anywhere

Senior Data Engineer

Symphony Solutions is a cloud transformation company headquartered in Amsterdam, with offices in the Netherlands, US, Ukraine, Macedonia, and Poland. Symphony Solutions recently celebrated 10 years of continuous service, and we attract many people with our open, welcoming culture and Dutch-inspired environment.

We are a company with a difference: we maintain a strong ethical culture and keep our company values at the heart of both interpersonal and client relationships. Our philosophy is to build close, strong relations with every client, employee, and candidate so that we succeed across our main technology directions (e.g. PHP, Java, C#, C++, SAP, Salesforce/Force.com, iOS, Android, BlackBerry).

We are looking for a Senior Data Engineer to participate in building cost-efficient, scalable data lakes for a wide variety of customers, from small startups to large enterprises.

Our projects usually fit into one of the following categories (but are not limited to them):

  • Collect data from IoT edge locations, store it in a data lake, orchestrate ETL processes over that data, and slice it into various data marts, then feed those data marts into machine learning or BI pipelines
  • Optimize existing Spark clusters: find bottlenecks in the code and/or architecture, then plan and execute improvements
  • Build a data delivery pipeline that ingests high-volume real-time streams, detects anomalies, computes windowed analytics, and writes the results to a NoSQL store for dashboard consumption (a minimal sketch follows this list)
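
To make the third category concrete, here is a minimal sketch in Python (PySpark Structured Streaming) of a windowed-analytics stage: it reads a stream from Kafka, aggregates values over sliding event-time windows, and flags simple threshold anomalies. The broker address, topic name, schema, and threshold are illustrative assumptions, not project specifics, and the console sink stands in for a NoSQL store.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("stream-window-analytics").getOrCreate()

# Hypothetical event schema: device id, metric value, event time.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka (broker and topic are placeholders;
# requires the spark-sql-kafka package on the classpath).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "iot-events")
       .load())

events = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Sliding event-time windows per device; a fixed threshold marks anomalies.
windowed = (events
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes", "1 minute"), "device_id")
            .agg(F.avg("value").alias("avg_value"), F.count("*").alias("events"))
            .withColumn("is_anomaly", F.col("avg_value") > 100.0))

# In a real pipeline this sink would be the NoSQL store (e.g. via foreachBatch);
# console output keeps the sketch self-contained.
query = (windowed.writeStream
         .outputMode("append")
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination()

In practice the window size, aggregation, and anomaly rule would come from the client's requirements; the sketch only shows the shape of the ingest, window, aggregate, and sink flow described above.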

Requirements:

  • Location – Anywhere in Europe
  • Max notice period for candidates – 1 month
  • Hands-on experience designing efficient architectures for high-load, enterprise-scale applications or ‘big data’ pipelines
  • Practical experience implementing microservices architectures (using a Python, Java, or Scala stack)
  • Hands-on experience with message queuing, stream processing, and highly scalable ‘big data’ stores
  • Advanced knowledge of and experience working with SQL and NoSQL databases
  • Proven experience re-designing and re-architecting large, complex business applications
  • Strong self-management and self-organizational skills
  • Successful candidates should have experience with any of the following software/tools (not all are required at the same time):
      • Java/Scala/Python – strong knowledge
      • Graph database development and optimization: Neo4j, SPARQL, Gremlin, TinkerPop, Pregel, Cypher, Amazon Neptune
      • Big data tools: Kafka, Spark, Hadoop (HDFS 3, YARN 2, Tez, Hive, HBase)
      • Stream-processing systems: Kinesis Data Streams, Spark Streaming, Kafka Streams, Kinesis Data Analytics
      • AWS cloud services: EMR, RDS, MSK, Redshift, DocumentDB, Lambda
      • Message queue systems: ActiveMQ, RabbitMQ, AWS SQS
  • 3-6 years of experience in a Data, Cloud, or Software Engineering role, with a degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
  • Valid AWS certifications would be a great plus

Responsibilities:

  • Analyze, scope and estimate tasks, identify technology stack and tools
  • Design and implement optimal architecture and migration plan
  • Develop new and re-architect existing solution modules, re-design and refactor program code
  • Examine performance and advise on necessary architecture changes
  • Communicate with the client on project-related issues
  • Collaborate with in-house and external development and analytics teams

We offer:

  • Medical Insurance
  • Personal Workstation
  • Competitive salary and compensation package
  • Friendly and professional team
  • Symphony Training Academy
  • Low hierarchy and open communication
  • 20 vacation days
  • Private Medical Care
