Big Data with Spark


What Will I Learn?

  • Frame big data analysis problems as Apache Spark scripts
  • Optimize Spark jobs through partitioning, caching, and other techniques (a short sketch follows this list)
  • Process continual streams of data with Spark Streaming
  • Traverse and analyze graph structures using GraphX
  • Develop distributed code using the Scala programming language
  • Build, deploy, and run Spark scripts on Hadoop clusters
  • Transform structured data using SparkSQL and DataFrames
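
To give a flavour of the code developed in the course, the sketch below is a minimal Spark word-count script in Scala that caches an intermediate result, one of the optimization techniques covered. The input path, application name, and local master setting are illustrative placeholders rather than course files.

    import org.apache.spark.sql.SparkSession

    // Minimal sketch of the kind of Spark script built in the course:
    // a word count over a text file, with the result cached for reuse.
    // The input path and master setting are placeholders, not course material.
    object WordCount {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("WordCount")
          .master("local[*]")         // on a real cluster this is supplied by spark-submit
          .getOrCreate()

        val lines = spark.sparkContext.textFile("data/book.txt")

        val counts = lines
          .flatMap(_.split("\\W+"))   // break each line into words
          .filter(_.nonEmpty)
          .map(word => (word.toLowerCase, 1))
          .reduceByKey(_ + _)         // sum the counts per word across partitions
          .cache()                    // keep the pair RDD in memory for repeated use

        counts.sortBy(_._2, ascending = false)
          .take(10)
          .foreach(println)

        spark.stop()
      }
    }

Later lessons show how to package a script like this and submit it to a Hadoop cluster with spark-submit.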

Topics for this course


Getting Started

Introduction to the course
Setting up the environment

Intro to Scala

Intro to Spark by example

Advanced Examples in Spark

Running Spark in a Cluster

SparkSQL, DataFrames, and DataSets (see the sketch after the topic list)

Machine Learning with MLlib

Intro to Streaming with Spark

Intro to GraphX

Way Forward

Apache Spark Interview Questions And Answers
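
As a small taste of the SparkSQL, DataFrames, and DataSets lessons listed above, here is a sketch that reads a CSV file and answers the same question twice, once through the DataFrame API and once through plain SQL. The file name and column names are hypothetical stand-ins, not the dataset used in the course.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Sketch of the DataFrame and SparkSQL style covered in the course.
    // The CSV path and column names (movieId, rating) are hypothetical.
    object RatingsSummary {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RatingsSummary")
          .master("local[*]")
          .getOrCreate()

        val ratings = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("data/ratings.csv")

        // Average rating per movie with the DataFrame API...
        ratings.groupBy("movieId")
          .agg(avg("rating").as("avgRating"))
          .orderBy(desc("avgRating"))
          .show(10)

        // ...and the same query as SQL over a temporary view.
        ratings.createOrReplaceTempView("ratings")
        spark.sql(
          """SELECT movieId, AVG(rating) AS avgRating
            |FROM ratings
            |GROUP BY movieId
            |ORDER BY avgRating DESC
            |LIMIT 10""".stripMargin
        ).show()

        spark.stop()
      }
    }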


Requirements

  • Some prior programming or scripting experience is required. A crash course in Scala is included, but you need to know the fundamentals of programming in order to pick it up.
  • You will need a desktop PC and an Internet connection. The course is created with Windows in mind, but users comfortable with macOS or Linux can use the same tools.
  • The software needed for this course is freely available, and I'll walk you through downloading and installing it.

Target Audience

  • Software engineers who want to expand their skills into the world of big data processing on a cluster
  • If you have no previous programming or scripting experience, you'll want to take an introductory programming course first.