Big Data Engineer in Skopje, Macedonia

We are looking for a Big Data Engineer to join our rapidly growing development team in Skopje.


Symphony – Why So Special?

At Symphony Solutions we have removed the barriers created by traditional organizations and embraced organic principles and a high degree of self-management. We believe this kind of organization is the optimal environment to attract and retain the best talent, develop them fully, and leverage their potential.
We have a unique employee selection process in which colleagues choose colleagues. This approach eliminates potential conflicts and ensures honest, transparent relationships with clients and within the team. Symphony Solutions strives to offer the best price/performance and to be the easiest company to do business with.
Symphony Solutions in Skopje is currently looking for a full-time Big Data Engineer who will contribute to the business transformation that underpins the company's long-term success and growth.

General requirements:

  • Experience building data pipelines in any public cloud (e.g. GCP Dataflow, AWS Glue, Azure Data Factory) or with equivalent tools;
  • Experience writing ETL processes with any popular tools;
  • Experience in data modeling, data design and persistence (e.g. warehousing, data marts, data lakes);
  • Strong knowledge of Big Data architectures and distributed data processing frameworks such as Hadoop, Spark, Kafka, and Hive;
  • Experience with and working knowledge of various development platforms, frameworks, and languages such as Java, Python, Scala, and SQL;
  • Experience with Apache Airflow, Oozie, and NiFi is a plus;
  • General knowledge of modern data-center and cloud infrastructure including server hardware, networking and storage;
  • Strong written and verbal English communication skills.

Nice to have:

  • Experience with BI platforms, reporting tools, data visualization products, ETL engines;
  • Experience with data streaming frameworks;
  • DevOps experience with a good understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc.);
  • Experience with HBase;
  • Experience in data management best practices, real-time and batch data integration, and data rationalization.

Main responsibilities:

  • Working with the Data Architects to implement data pipelines;
  • Working with the Big Data Principal Architects to develop both proofs of concept and complete implementations;
  • Working on complex and varied Big Data projects including tasks such as collecting, parsing, managing, analyzing, and visualizing very large datasets;
  • Translating complex functional and technical requirements into detailed designs;
  • Writing high-performance, reliable and maintainable code;
  • Performing data processing requirements analysis;
  • Performance tuning for batch and real-time data processing;
  • Securing components of clients’ Big Data platforms;
  • Diagnostics and troubleshooting of operational issues;
  • Health-checks and configuration reviews;
  • Data pipeline development – ingestion, transformation, cleansing;
  • Data flow integration with external systems;
  • Integration with data access tools and products;
  • Assisting application developers and advising on efficient data access and manipulation;
  • Defining and implementing efficient operational processes.

We offer:

  • Competitive salary and compensation package;
  • Friendly and professional team;
  • Career and professional growth;
  • Performance reviews twice a year;
  • Great international work environment;
  • Comfortable office facilities;
  • Symphony Training Academy;
  • Low hierarchy and open communication;
  • Casual Fridays, corporate events;
  • Free English courses.

Send us your CV using the form below

