This four-day workshop covers data science and machine learning workflows at scale using Apache Spark 2 and other key components of the Hadoop ecosystem. The workshop emphasizes the use of data science and machine learning methods to address real-world business challenges. It is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale their analyses and machine learning models up to large, distributed datasets.
Course price:
EUR 2,180 per course
incl. VAT: EUR 2,638 per course
This course is suitable for developers, data analysts, and statisticians with basic knowledge of Apache Hadoop (HDFS, MapReduce, Hadoop Streaming, and Apache Hive), as well as experience working in Linux environments.
Course instructors
Instructors from: DataScript s.r.o.
[Course] Course program (content of the lecture/seminar/retraining/study) ...
Goals
The workshop includes brief lectures, interactive demonstrations, hands-on exercises, and discussions covering the following topics:
Overview of data science and machine learning at scale
Overview of the Hadoop ecosystem
Working with HDFS data and Hive tables using Hue
Introduction to Cloudera Data Science Workbench
Overview of Apache Spark 2
Reading and writing data
Inspecting data quality
Cleansing and transforming data
Summarizing and grouping data
Combining, splitting, and reshaping data
Exploring data
Configuring, monitoring, and troubleshooting Spark applications
Overview of machine learning in Spark MLlib
Extracting, transforming, and selecting features
Building and evaluating regression models
Building and evaluating classification models
Building and evaluating clustering models
Cross-validating models and tuning hyperparameters
Building machine learning pipelines
Deploying machine learning models
Outline
Read the entire course outline for more details.
Prerequisites
Workshop participants should have a basic understanding of Python or R and some experience exploring and analyzing data and developing statistical or machine learning models. Knowledge of Hadoop or Spark is not required.