The big data engineer builds what the big data architect has designed. Big data engineers develop, maintain, test, and evaluate big data solutions within organizations. Most of the time they are also involved in the design of those solutions, drawing on their experience with Hadoop-based technologies such as MapReduce and Hive, or with NoSQL databases such as MongoDB and Cassandra. A big data engineer builds large-scale data processing systems, is an expert in data warehousing solutions, and should be able to work with the latest (NoSQL) database technologies.
A big data engineer should have solid software engineering experience before moving into the field of big data: experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructures. Big data engineers should also be able to architect highly scalable distributed systems using various open-source tools, understand how algorithms work, and have experience building high-performance algorithms.
A big data engineer should embrace the challenge of dealing with petabytes or even exabytes of data on a daily basis, and should understand how to apply technologies to solve big data problems and develop innovative big data solutions. To do this, the big data engineer needs extensive knowledge of programming and scripting languages such as Java, Python, and R, along with solid Linux skills. Expert knowledge of different (NoSQL or RDBMS) databases, such as MongoDB or Redis, is also required. Building data processing systems with Hadoop and Hive using Java or Python should be second nature to the big data engineer.
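To illustrate the kind of processing model these requirements refer to, here is a minimal sketch of the MapReduce pattern (the map, shuffle, and reduce phases that Hadoop distributes across a cluster) written in plain Python as a word count. The function names and sample input are illustrative, not part of any Hadoop API; a real job would run on a framework such as Hadoop Streaming rather than in a single process.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by their key (the word).
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

# Illustrative input; on a cluster these lines would be split across many mappers.
lines = ["big data big systems", "data systems data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 3, 'systems': 2}
```

The same three-phase structure underlies a Hive query or a Hadoop job: the framework's value is in running each phase in parallel over data too large for one machine.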
Apply with Mail