TeamViewer Global

Launched in 2005, TeamViewer focuses on cloud-based technologies that enable online support and real-time collaboration across the globe. Its reach has led 90% of Fortune 500 companies to rely on TeamViewer to bring colleagues together across all platforms and devices. With ITbrain (an integrated management platform with remote monitoring, asset tracking, and anti-malware features) and Monitis (a cloud-based, agentless monitoring solution for websites, servers, and applications), TeamViewer has expanded its portfolio with successful technologies that enable IT professionals to manage, monitor, and support their infrastructure and users across the globe more quickly.

TeamViewer Global Berlin, Germany
Full time
Manage all big data pipelines that supply statistics on our product usage, and maintain the API interfaces responsible for pulling and pushing data between our systems and the BI data store.

*Responsibilities:*
- Work in a business-oriented data function and manage all big data pipelines of our product
- Design, develop, deploy and contribute to data-analytical and self-learning applications scaling to terabytes of data
- Install, configure and upgrade distributions of Apache Spark and Kafka
- Build fast and scalable data pipelines between our big data platforms and our data warehouse
- Work on real Big Data problems with development teams on a product that has 1.7+ billion clients all over the world

*What we offer:*
- It’s all about the team: become part of a community that values respect, support and open feedback
- A salary structure that reflects market standards and increases based on your skills and experience
- Additional bonuses that reward your excellent work performance and contribution, based on the success of the company
- Perks such as free fruit, drinks and a variety of health and well-being activities
- Good work-life balance with home-office options and flexible working hours
- A culture that truly lives and celebrates diversity: our colleagues come from more than 60 countries and speak more than 40 languages

*What you bring:*
- University degree in computer science, mathematics, statistics or information systems, or equivalent experience with a focus on Big Data
- At least 2 years of relevant professional Big Data experience
- Excellent programming skills in Scala and Java; experience with Python is beneficial
- Solid experience with Apache Spark; Kafka is a plus
- Previous hands-on experience with AWS cloud solutions
- A proven track record of working with HDFS, MapReduce and Hive is recommended
- Background maintaining clustered data warehouses such as Redshift is advantageous
- Fluency in English paired with great communication skills rounds off your profile

Job Type: Full-time