Speeding up the design and implementation of Deep Learning solutions using Apache Spark
About This Book
- Dive into the world of distributed deep learning with Apache Spark
- Train various types of neural networks with popular deep learning libraries such as BigDL, DeepLearning4j, and TensorFlow
- A practical guide to developing deep learning applications in Spark that handle large and complex data sets
If you are a Scala developer, data scientist, or data analyst who wants to learn how to use Spark to implement efficient deep learning models, this is the book for you. Knowledge of core machine learning concepts and some exposure to Spark will be helpful.
What You Will Learn
- Understanding the basics of Deep Learning
- Getting started with Apache Spark (Scala and Python)
- Moving to a practical deep dive into deep learning algorithms
- Getting hands-on with the most popular frameworks (DeepLearning4j, TensorFlow, and BigDL) for deep learning
- Experimenting with deep learning concepts, algorithms, and the toolbox for deep learning
- Learning the principles of distributed modelling and different types of neural networks
Deep Learning is a subset of Machine Learning in which data sets are processed through several layers of increasing complexity. This book addresses both the technical and analytical complexity involved and the speed at which Deep Learning solutions can be implemented on Apache Spark.
You will start by understanding the fundamentals of Apache Spark and deep learning. You will see how to set up Spark for performing deep learning, and learn the principles of distributed modelling and different types of neural networks. You will then implement deep learning models such as CNNs, RNNs, and LSTMs on Spark, gaining hands-on experience of what it takes and a general feel for the complexity involved. During the course of the book, you will use popular deep learning frameworks such as TensorFlow, DeepLearning4j, and BigDL to train your distributed models.
Towards the end of the book, you'll have gained experience with implementations of your models on a variety of use-cases.
Product dimensions: 7.50(w) x 9.25(h) x 0.67(d)
About the Author
Guglielmo Iozzia is currently a Big Data Delivery Manager at Optum in Dublin (Ireland).
He completed his Master's Degree in Biomedical Engineering at the University of Bologna (Italy). For his final-year engineering project, he designed and implemented a diagnostic system to predict the behaviour of intracranial pressure in neurosurgery intensive care patients. The project was part of a larger collaboration between the DEIS (Department of Engineering, Information and Systems) of the University of Bologna and the Policlinico Hospital of Milan, and it was carried out using real patient data.
After graduating, he joined a newly founded IT company in Bologna that had implemented a new system to manage online payments. The company grew rapidly and expanded its business into different sectors (banking, manufacturing, public administration), so he had the chance to work on complex Java projects for different customers in different areas. Among these projects, GDPM, a predictive maintenance system for the machinery produced by the G.D group, deserves special mention: Guglielmo played an active role in its initial design and its first implementation and release to production.
Six years later he moved to Rome where, after a short experience as a consultant at IFAD, he moved to RAI Net (part of the RAI Television group) before joining the IT department of FAO, an agency of the United Nations, for more than 5 years.
In 2013 he had the chance to join IBM in Dublin. There he improved his DevOps skills, working mostly on cloud-based applications, and had the opportunity to move into Big Data and Machine Learning, applied first to Operations and then to Cybersecurity.
At the end of September 2016 he moved to Optum (part of the UnitedHealth Group), the healthcare IT company where he currently works. He and his teams are involved in Big Data and Analytics projects, mostly in the Payment Integrity space, in particular in Fraud, Waste, and Abuse detection and prevention.
He is a Golden Member at DZone, where he writes articles, and maintains a personal blog to share his findings and thoughts on different tech topics (Java, Scala, Big Data, AI, DevOps, Open Source).
Table of Contents
- The Apache Spark Ecosystem
- Deep Learning Basics
- Extract, Transform, Load
- Convolutional Neural Networks
- Recurrent Neural Networks
- Training Neural Networks with Spark
- Monitoring and Debugging Neural Network Training
- Interpreting Neural Network Output
- Deploying on a Distributed System
- NLP Basics
- Textual Analysis and Deep Learning
- Image Classification
- What’s Next for Deep Learning?