Big Data Engineer - Analytics

Job description

Help us catch bad guys with math.



Our team is growing and we’re looking to add a Big Data Engineer - Analytics who can focus on extending our existing analytics platform and related capabilities to add unprecedented analytics flexibility for our customers. This includes enabling data scientists to manipulate and combine events and models to extend and customize the analytics in ways that provide unique value for each customer.


Although there is a lot of uncertainty in the market today, especially considering the COVID-19 crisis, we are set up to accommodate fully remote work, and a fully virtual interview, selection, and onboarding process.


We are looking for someone who is passionate about what they do, takes a creative approach to problem-solving, and will champion innovative machine learning hooks that deliver real value and perform in big data environments.


Here’s what you'll do:


  • Implement model data flows to support running cutting-edge machine learning techniques on massive amounts of data (a brief illustrative sketch follows this list).
  • Work with product managers and data scientists to turn new features and algorithms into beautiful, battle-tested code.
  • Work with the technologies we use to analyze and identify cyber-security threats for our customers (Elasticsearch, Spark, HBase, Kafka, Vertica, NiFi, using Java and Scala).
  • Work side by side with some of the smartest minds in the fields of machine learning and behavioural analytics.
  • Create efficient and robust cloud-based solutions, leveraging the best in cloud technologies.
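
For a flavour of the kind of model data flow this role involves, here is a minimal, illustrative sketch in Scala using Spark Structured Streaming to aggregate events from Kafka. The topic name, broker address, and windowed count are hypothetical stand-ins chosen for the example, not our production pipeline:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventFlowSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("event-flow-sketch")
          .getOrCreate()
        import spark.implicits._

        // Read raw security events from a Kafka topic
        // ("raw-events" and the broker address are hypothetical).
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "raw-events")
          .load()

        // Kafka payloads arrive as binary; treat the key as the entity
        // and count events per entity in 5-minute windows, a stand-in
        // for a real behavioural-analytics aggregation.
        val counts = raw
          .selectExpr("CAST(key AS STRING) AS entity", "timestamp")
          .withWatermark("timestamp", "10 minutes")
          .groupBy(window($"timestamp", "5 minutes"), $"entity")
          .count()

        // Print the rolling aggregates; a production flow would land
        // them in HBase, Vertica, or Elasticsearch instead.
        counts.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }

A real flow would swap the console sink for one of the stores above and plug model scoring into the aggregation step.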

Job requirements

To be considered, you must have:


  • An undergraduate or Master’s degree in Computer Science or equivalent engineering experience.
  • Strong interest in software design, distributed computing, and databases.
  • Experience developing in a JVM environment (Java, Scala, Clojure).
  • At least two years of experience developing with or using Big Data & Analytics stacks/tools such as Hadoop, HBase, Spark, Presto, and Vertica.
  • Experience implementing and using streaming platforms such as Spark Streaming, Flink, Kafka, Storm, etc.
  • Experience with Kubernetes, Docker, Ansible or any other infrastructure or containerization management/automation platform.
  • Familiarity with cloud technologies and best practices (AWS EMR, Azure, GCP) for distributing and analyzing big data in the cloud would be considered an asset.


We’d also love it if you had the following (though not required):


  • Familiarity with data science or machine learning packages (pandas, R, TensorFlow, etc.).
  • Familiarity with virtualization technologies (VMware ESX, Docker).
  • Contributions to open-source software (code, docs or mailing list posts).
  • Interest in understanding and analyzing diverse types of data.


Interset is an equal opportunity employer. Should you require accommodation in any aspect of our selection process, please contact our recruitment team at hiring (at) interset (dot) com.


About Interset:


We use big data and advanced behavioural analytics to detect and prevent the theft of intellectual property. Simply put, we catch bad guys with math. Part of the Micro Focus group of companies, we are a fast-paced, all-hands-on-deck kind of environment where you are respected and listened to from day one. We have a startup feel within the stability and structure of a large global company. We hire people with a wide scope of knowledge and experience who want to jump into self-organizing, cross-functional teams. We manage our own schedules, we support our teammates, and we always make time for fun.