Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer
A big data operation and maintenance engineer is responsible for the operation and maintenance management, cluster capacity planning, expansion and performance optimization of the company's big data platform. The following are detailed descriptions of the responsibilities of the big data operation and maintenance engineer that I have compiled for you.

Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer 1

Responsibilities:

1. Responsible for the operation and maintenance management (deployment, monitoring, optimization and fault handling) of the data platform;

2. Responsible for architecture review, capacity planning and cost optimization of the Hadoop/Spark/Flink/Elasticsearch/Kafka systems;

3. Responsible for user management, permission assignment and resource allocation on the big data platform;

4. Participate in the design of the data mining and machine learning platforms, and provide executable operation and maintenance plans;

5. Participate in the development of tools related to the data platform (including automated deployment, monitoring, ETL, etc.); a minimal monitoring sketch follows this list;

6. Develop an in-depth understanding of the data platform architecture, and identify and resolve hidden risks and performance bottlenecks;

7. Operate and maintain ETL tools, scheduling tools and relational databases.
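As an illustration of the monitoring-tool development mentioned in item 5, here is a minimal sketch that polls the NameNode's JMX endpoint and warns when HDFS capacity usage crosses a threshold. The host name and the 85% threshold are assumptions chosen for the example; Hadoop 3.x serves the /jmx servlet on the NameNode web UI port (9870 by default).

```python
# Minimal HDFS capacity check against the NameNode JMX servlet.
# Assumptions: a Hadoop 3.x NameNode web UI at "namenode:9870" (hypothetical host)
# and an 85% usage threshold picked only for illustration.
import json
import sys
from urllib.request import urlopen

NAMENODE_JMX = "http://namenode:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem"
THRESHOLD = 0.85  # alert when more than 85% of HDFS capacity is used

def check_hdfs_capacity(url: str = NAMENODE_JMX, threshold: float = THRESHOLD) -> int:
    """Return 0 if usage is below the threshold, 1 otherwise (exit-code style)."""
    with urlopen(url, timeout=10) as resp:
        bean = json.load(resp)["beans"][0]
    used, total = bean["CapacityUsed"], bean["CapacityTotal"]
    ratio = used / total if total else 0.0
    print(f"HDFS capacity used: {ratio:.1%} ({used} / {total} bytes)")
    return 1 if ratio > threshold else 0

if __name__ == "__main__":
    sys.exit(check_hdfs_capacity())
```

A check like this can be run from cron or wrapped into whatever alerting platform the team already uses.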

Qualifications:

1. Bachelor's degree or above, majoring in computer software;

2. At least 1 year of experience operating and maintaining big data components (Hadoop/YARN/HBase/Hive/Spark/Kafka, etc.), at least 1 year of CDH or HDP maintenance experience, and at least 3 years of system operation and maintenance experience;

3. Deep understanding of the Linux system, able to deploy open source software independently, and proficient in at least one scripting language (shell/perl/python, etc.); familiarity with Python development is preferred;

4. Strong logical thinking, methodical work, a strong sense of responsibility, a proactive attitude, strong execution and a good sense of teamwork.

Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer 2

Responsibilities:

1. Responsible for the big data ETL system, ensuring stable and available operation and maintenance services;

2. Responsible for data acquisition and exchange solutions, as well as joint debugging and testing;

3. Responsible for reviewing and bringing online data acquisition and exchange tasks;

4. Responsible for promptly troubleshooting ETL process failures, building up a knowledge base and improving operation and maintenance documentation; a retry-and-alert sketch follows this list;

5. Responsible for monitoring and optimizing ETL performance, and continuously proposing improvements to the automated operation and maintenance platform.
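To make item 4 concrete, below is a rough retry-and-alert wrapper for a single ETL step. Every name in it (the step function, the alert hook) is a placeholder standing in for whatever scheduler and notification channel are actually in use.

```python
# Generic retry-and-alert wrapper for one ETL step; all names are placeholders.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl-runner")

def send_alert(message: str) -> None:
    # Placeholder: hook into the team's real alerting channel (mail, SMS, IM bot, ...).
    log.error("ALERT: %s", message)

def run_with_retry(step, *, retries: int = 3, backoff_seconds: int = 60) -> None:
    """Run a callable ETL step, retrying on failure and alerting once retries are exhausted."""
    for attempt in range(1, retries + 1):
        try:
            step()
            log.info("step %s succeeded on attempt %d", step.__name__, attempt)
            return
        except Exception:
            log.exception("step %s failed on attempt %d/%d", step.__name__, attempt, retries)
            if attempt < retries:
                time.sleep(backoff_seconds * attempt)  # simple linear backoff
    send_alert(f"ETL step {step.__name__} failed after {retries} attempts")

if __name__ == "__main__":
    def extract_orders():  # hypothetical ETL step, fails on purpose for demonstration
        raise RuntimeError("source database unreachable")
    run_with_retry(extract_orders, retries=2, backoff_seconds=1)
```

Recording each failure and its fix in the knowledge base mentioned above turns one-off troubleshooting into reusable runbook entries.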

Skill requirements:

1. Bachelor's degree or above in computer science or a related major;

2. Familiar with the Linux system and able to write fluently in one or more of shell/perl/python;

3. Familiar with Hive, Hadoop and MapReduce cluster principles; experience operating and maintaining a Hadoop big data platform is preferred;

4. Familiar with database performance optimization and SQL tuning, with relevant hands-on experience (a small EXPLAIN-based illustration follows this list);

5. Works well under pressure, with a strong sense of responsibility, good communication skills, learning ability and teamwork.
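As a small, self-contained illustration of the SQL tuning mentioned in item 4: check the query plan before and after adding an index. SQLite is used here only because it ships with Python; the EXPLAIN-first habit carries over to MySQL, Hive and other engines, and the table and column names are invented for the example.

```python
# Index-driven SQL tuning in miniature, using SQLite (bundled with Python).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 1000, f"2024-01-{i % 28 + 1:02d}") for i in range(10000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Before: the planner has to scan the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# After adding an index, the planner can look up matching rows directly.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```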

Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer 3

Responsibilities:

1. Responsible for the operation, maintenance and development of distributed big data platform products, ensuring their high availability and stability;

2. Responsible for operability design, capacity planning and service monitoring of the big data system architecture, continuously optimizing the service architecture and cluster performance;

3. Control and optimize costs through technical means, and improve the operation and maintenance efficiency of the big data platform through automated tools and processes (see the resource-usage sketch after this list);

4. Provide big data technical guidance to project developers and solve technical problems encountered in using the big data platform;
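For the cost control and automation in item 3, a first practical step is simply knowing which running jobs consume the most resources. The sketch below queries the YARN ResourceManager REST API for running applications and sorts them by memory-seconds; the host name is a placeholder and 8088 is only the YARN default web port.

```python
# List running YARN applications sorted by memory-seconds, as a starting point
# for spotting expensive jobs. "rm-host" is a placeholder; 8088 is the YARN
# ResourceManager's default web port.
import json
from urllib.request import urlopen

RM_APPS = "http://rm-host:8088/ws/v1/cluster/apps?states=RUNNING"

def top_running_apps(url: str = RM_APPS, limit: int = 10) -> None:
    with urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    apps = (payload.get("apps") or {}).get("app", [])
    apps.sort(key=lambda a: a.get("memorySeconds", 0), reverse=True)
    for app in apps[:limit]:
        print(app["id"], app["user"], app["name"], app.get("memorySeconds", 0))

if __name__ == "__main__":
    top_running_apps()
```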

Qualifications:

1. More than three years of working experience in big data operation and maintenance; experience at a large Internet company is preferred; full-time bachelor's degree or above;

2. Proficient in at least one development language; experience developing in Java or Python is preferred;

3. Proficient in tools across the Hadoop ecosystem and high-performance caching, with practical experience, including but not limited to Hadoop, HBase, Hive, Presto, Kafka, Spark, YARN, Flink, Logstash, Flume, ClickHouse, etc.;

4. Familiar with commonly used relational databases such as MySQL and proficient in writing SQL statements; experience with distributed NoSQL databases and their performance tuning is preferred;

5. Familiar with the Linux environment and the use of shell scripts;

6. Strong interest in big data technology and in developing a career in the big data field;

7. Strong sense of responsibility, execution, service awareness, learning ability and ability to work under pressure;

8. Good communication skills, initiative and sense of responsibility.

Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer 4

Responsibilities:

1. Responsible for the daily maintenance, monitoring and exception handling of the big data cluster, ensuring its stable operation;

2. Responsible for batch management and the operation and maintenance of big data;

3. Responsible for user management, permission management, resource management and performance optimization of big data clusters;

4. Deeply understand the data platform architecture, find and solve major failures and performance bottlenecks, and build a first-class data platform;

5. Follow cutting-edge big data technology and continuously optimize the data clusters;

6. Experience in operation and maintenance of Huawei's big data platform is preferred;

Job requirements:

1. At least 1 year of experience in big data operation, maintenance and development;

2. Good computer and networking fundamentals; familiar with the Linux file system, kernel and performance tuning, and with protocols such as TCP/IP and HTTP;

3. Familiar with the big data ecosystem, with relevant operation and development experience (HDFS, Hive, HBase, Sqoop, Spark, Flume, ZooKeeper, ES, Kafka);

4. Skilled in using scripting languages such as shell and python to develop operation and maintenance management tools (a minimal ZooKeeper health-check sketch follows this list);

5. Good documentation writing habits;
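As one example of the ops tooling mentioned in item 4, here is a minimal liveness check for a ZooKeeper ensemble using the "ruok" four-letter command. The host names are hypothetical; 2181 is ZooKeeper's default client port, and on recent versions the command must be enabled via 4lw.commands.whitelist.

```python
# Minimal ZooKeeper liveness check using the "ruok" four-letter command.
# Host names are hypothetical; 2181 is the ZooKeeper default client port.
import socket

ZK_HOSTS = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]

def zk_is_ok(host: str, port: int = 2181, timeout: float = 5.0) -> bool:
    """Send 'ruok' and expect the literal reply 'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            return sock.recv(16) == b"imok"
    except OSError:
        return False

if __name__ == "__main__":
    for host in ZK_HOSTS:
        print(f"{host}: {'ok' if zk_is_ok(host) else 'NOT responding'}")
```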

Description of Specific Responsibilities of Big Data Operation and Maintenance Engineer 5

Responsibilities:

1. Responsible for the construction, task scheduling, monitoring and early warning of big data clusters for the company and its projects, continuously improving the big data platform to ensure stability and security;

2. Responsible for cluster capacity planning, expansion, performance optimization, daily inspection and emergency on-call duty, and participating in the architecture design and improvement of the big data infrastructure;

3. Thoroughly study the operation and maintenance technologies related to big data services, and explore new operation and maintenance technologies and development directions.

Requirements:

1. Familiar with basic Linux commands, able to independently write shell scripts for daily server operation and maintenance;

2. Familiar with the installation and optimization of Hadoop, Kafka, ZooKeeper, HBase and Spark in the Hadoop ecosystem;

3. Familiar with hardware and software equipment and network principles, with rich experience in big data platform deployment, performance optimization and operation and maintenance;

4. Conscientious and responsible at work, with strong learning ability, hands-on ability and the ability to analyze and solve problems;

5. Able to use various open source monitoring tools, operation and maintenance tools, HA and load balancing software to complete tasks;

6. Familiar with JVM tuning (a brief GC-sampling sketch follows this list);
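Related to the JVM tuning in item 6, a sensible first step is to observe garbage-collection behaviour before changing any flags. The sketch below samples a running JVM (for example a DataNode or an HBase RegionServer) with the JDK's jstat tool; the process id is a placeholder and jstat must be on the PATH.

```python
# Sample GC statistics for a running Java process using the JDK's jstat tool.
# The default pid below is a placeholder; pass the real pid as an argument.
import subprocess
import sys

def sample_gc(pid: int, interval_ms: int = 1000, samples: int = 5) -> None:
    """Print `samples` lines of `jstat -gcutil` output for the given JVM pid."""
    cmd = ["jstat", "-gcutil", str(pid), str(interval_ms), str(samples)]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)

if __name__ == "__main__":
    sample_gc(int(sys.argv[1]) if len(sys.argv) > 1 else 12345)
```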