Big Data-Hadoop Resume Sample
Work Experience
- Experience in database development, reporting, and analytics
- Experience in web service or middle tier development of data driven apps
- Willingness to explore new ideas and have a passion to make them happen!
- Experience in data cleansing, schema reconciliation, and related tools (e.g. Uber Paricon)
- API scalability, load balancing, and security (REST, MQTT, JSON, GraphQL, encryption, key management)
- Strong background in data and analytics systems, primarily the Apache big data toolsets (e.g. HBase, Cassandra, Kafka, and Spark)
- Experience in scalability/failure testing (Simian Army/Chaos Monkey)
- Provide insight to drive business decisions based on online behavioral tracking and, where necessary, the inclusion of offline data, working closely with business teams to identify and explore opportunities to support new client acquisition
- Drive per-channel and cross-channel optimization for digital acquisition, using available data sources to deep dive and troubleshoot digital tactics performance issues and propose methodical optimization tests, both on-site and campaign based
- Represent the “voice of the data” with regard to external data solutions providers, in cooperation with GCMIO Internet and Marketing teams
- Proven experience in a range of big data architectures and frameworks including Hadoop ecosystem, Java MapReduce, Pig, Hive, Spark, Impala etc.
- Perform architecture design, data modeling, and implementation of Big Data platform and analytic applications for Hitachi Consulting’s clients
- Analyze latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings; bring these insights and best practices to Hitachi Consulting’s Insights and Analytics practice
- Design experience in large, complex, multi-terabyte data environments
- Experience with Java/Spring open source and SQL
- Exposure to the Apache Hadoop ecosystem
- Collaborate with AWS field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Amazon Elastic Compute Cloud (EC2), Amazon Data Pipeline, S3, DynamoDB (NoSQL), Relational Database Service (RDS), Elastic MapReduce (EMR) and Amazon Redshift
- Experience of IT platform implementation in a highly technical and analytical role
- Customer facing skills to represent AWS well within the customer’s environment and drive discussions with senior personnel regarding trade-offs, best practices, project management and risk mitigation
- General operational expertise such as good troubleshooting skills, understanding of system’s capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks
Professional Skills
- Assist with the full lifecycle of a Hadoop solution, including requirements analysis, governance, capacity requirements, technical architecture design (including hardware, OS, and network topology), application design, testing, and deployment
- Strong development experience on Spark (using Java), Hive, Pig and MapReduce
- Good communication skills, both written and oral
- Understanding of network concepts and experience managing host level network services
- Experience in designing, implementing and administering highly available Hadoop clusters
- Experience developing solutions utilizing any of the following
How to Write a Big Data-Hadoop Resume
The Big Data-Hadoop role is responsible for travel, finance, training, integration, database, security, Java, architecture, design, and digital.
To write a great resume for a Big Data-Hadoop job, it must include:
- Your contact information
- Work experience
- Education
- Skill listing
Contact Information For a Big Data-Hadoop Resume
The contact information section is important in your Big Data-Hadoop resume. The recruiter has to be able to contact you ASAP if they want to offer you the job. This is why you need to provide your:
- First and last name
- Telephone number
Work Experience in Your Big Data-Hadoop Resume
The work experience section is an essential part of your Big Data-Hadoop resume. It’s the one thing the recruiter really cares about and pays the most attention to.
This section, however, is not just a list of your previous Big Data-Hadoop responsibilities. It’s meant to present you as a well-rounded candidate by showcasing your relevant accomplishments, and it should be tailored specifically to the particular Big Data-Hadoop position you’re applying to.
The work experience section should be the detailed summary of your latest 3 or 4 positions.
Representative Big Data-Hadoop resume experience can include:
- Good experience with the Avro, JSON and Parquet file formats
- Experience working with and developing datasets for Tableau developers and data scientists
- Experience administering Hadoop or other Data Science and Analytics platforms using the technologies above
- Working experience with Linux, Solaris, and UNIX, ideally CentOS using SIMP
- Experience supporting web architectures, including Tomcat, JBoss, and/or the Apache web server
- Experience working with technologies such as Spark, Storm, Kafka, HBase, and Accumulo
Education on a Big Data-Hadoop Resume
Make sure to make education a priority on your Big Data-Hadoop resume. If you’ve been working for a few years and have a few solid positions to show, put your education after your Big Data-Hadoop experience. For example, if you have a Ph.D. in Neuroscience and a Master’s in the same sphere, just list your Ph.D. Besides the doctorate, Master’s degrees go next, followed by Bachelor’s and, finally, Associate’s degrees.
Additional details to include:
- School you graduated from
- Major/minor
- Year of graduation
- Location of school
These are the four additional pieces of information you should mention when listing your education on your resume.
Professional Skills in a Big Data-Hadoop Resume
When listing skills on your Big Data-Hadoop resume, remember always to be honest about your level of ability. Include the Skills section after experience.
Present the most important skills in your resume; below is a list of typical Big Data-Hadoop skills:
- Good hands-on development experience using Unix shell scripting
- Good to have experience with Spark using Java
- Familiarity with HDFS and experience getting data into HDFS using Sqoop
- Minimum 3 years’ experience working with Master Data Management tools such as Informatica, IBM, etc.
- A strong understanding of product development
- Experience in enterprise big data platforms using Hadoop
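The Sqoop skill above (getting data into HDFS) is typically demonstrated with imports like the following sketch. The JDBC connection string, credentials, table name and HDFS path are all hypothetical placeholders:

```shell
# Hypothetical Sqoop import: copy a relational table into HDFS as Parquet.
# Replace host, database, user and table with your own values.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --as-parquetfile \
  --num-mappers 4
```

`-P` prompts for the password instead of putting it on the command line, and `--num-mappers` controls how many parallel map tasks split the import.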
List of Typical Skills For a Big Data-Hadoop Resume
Skills For Big Data Hadoop & Teradata Architect Resume
- Experience in business domains like B2B, Manufacturing, Communications, Finance and Supply Chain
- Strong expertise in developing the data as a service platform
- Experience with delivering Big Data Solutions in the cloud with AWS or Azure or Google Cloud
- Experience working with MapR vendor ecosystems
- Prioritise and manage workload (both your own and others’)
- Generally requires 8-10+ years related experience in enterprise architecture, software development and technology leadership positions
- Over 3 years’ experience with Python (PySpark) and Scala
- Experience in Big Data/Hadoop technologies
- In-depth knowledge and experience across the Hadoop ecosystem, including Pig, Hive, MapReduce, Flume, Kafka and Sqoop
Skills For Big Data Hadoop Engineer Resume
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB
- Experience with SOA architecture design
- Experience with Cloudera/Hortonworks
- Designing ingestion, low latency, visualization clusters to sustain data loads
- Knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, Hive, Pig, Knox, Ambari, Oozie, machine learning and HBase
- Work closely with supporting infrastructure groups such as platform, database, storage, and networking teams to troubleshoot issues
Skills For Principal Business Analysis Consultant With Big Data & Hadoop Resume
- Take the initiative to create knowledge base materials dedicated towards operational efficiency while also empowering and enabling the developer community
- Ensure that current trends, developments and improvements in data acquisition, processing and reporting are understood and investigated
- Configure, monitor and support cluster using Cloudera Manager (GUI and command line): component health, performance, resource management, security
- Troubleshoot and resolve Hadoop infrastructure and operational related issues including availability, capacity, and performance
- Provide recommendations for infrastructure requirements based on proactive and regular monitoring and review of the infrastructure
- Participate in infrastructure provisioning
Skills For Big Data / Hadoop Architect Resume
- Identify, develop and execute training/education
- Be a leader in execution and in knowledge sharing
- Reporting tools like Tableau, OBIEE
- Creation of a platform to allow the realisation of our future Machine Learning initiatives
- Work with the Legal and Security teams to establish a strategy for securing and managing data appropriately
- Working hands-on with Production-level Big Data and Analytics solutions
- Background with a higher level scripting language, such as Perl, Python, Shell
Skills For Big Data / Hadoop System Administrator Resume
- Python scripting
- Work closely with the development teams to resolve performance issues
- Manage and maintain production and pre-production environments
- Provide off-hours support when needed
- Hadoop ecosystem (MapR)
- RDBMS and Oracle/SQL knowledge
Skills For Senior Data Engineer Big Data-> Hadoop Resume
- BI/DW concepts
- Knowledge of HBase, Cassandra and Solr
- Work with senior client executives
- Knowledge of HBase, Cassandra and Kafka
- The ability to empower others and develop the team
Skills For Q-dna-tl-big Data-hadoop / Java Engineer Resume
- Understands and has implemented highly available, geographically diverse operational platforms
- Understand the needs for analysis of M2M data within the wider organisation and develop a strategy to achieve success
- Drive innovation. People in our Client Delivery & Operations career track drive delivery and capability excellence through the design, development and/or delivery of a solution, service, capability or offering. They grow into delivery-focused roles, and can progress within their current role, laterally or upward
- Learn and grow continuously: Follow personalized training to build skills, while expanding your experience defining and implementing solutions on complex client projects with a scope that is unsurpassed in industry
- Experience with major Hadoop distributions
Skills For Big Data Hadoop AWS Architect Resume
- Experience with MapR Hadoop distributions
- Cloudera 5.x/YARN experience
- Experience with the Big Data platform and application development
- Hands-on experience in Apache Big Data Components/Frameworks
- Deep technical expertise in Spark, Hive, Impala, Kudu
- Configure and support API and open-source integrations
- Security+ or DoD-required certification, or the ability to obtain required certifications
Skills For Big Data Hadoop Senior Data Modeler Resume
- Preferably, knowledge of the Docker containerization engine
- CI/CD pipeline setup with Jenkins or other tools
- Certifications for Linux, Puppet, Hadoop and VMware
- Cassandra/MongoDB
- Data analysis