Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server
by Joshua Cook
Learn Docker "infrastructure as code" technology to define a system for performing standard but non-trivial data tasks on medium- to large-scale data sets, using Jupyter as the master controller.
It is not uncommon for a real-world data set to resist easy management: it may not fit in main memory, or it may require prohibitively long processing times. These are significant challenges even for skilled software engineers, and they can render a standard Jupyter setup unusable.

As a solution to this problem, Docker for Data Science proposes using Docker. You will learn how to use existing pre-built public images published by the major open-source projects—Python, Jupyter, Postgres—and how to extend these images with a Dockerfile to suit your specific purposes. The Docker Compose technology is examined, and you will learn how it can be used to build a linked system, with Python churning through data behind the scenes and Jupyter managing these background tasks. Best practices for using existing images are explored, as is developing your own images to deploy state-of-the-art machine learning and optimization algorithms.
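Extending a public image with a Dockerfile, as described above, might look like the following minimal sketch. The base image jupyter/scipy-notebook is one of the official Jupyter Docker stacks; the packages installed here are illustrative, not taken from the book:

```dockerfile
# Start from a pre-built public Jupyter image
# (one of the "opinionated" Jupyter Docker stacks).
FROM jupyter/scipy-notebook

# Extend the image with extra libraries for a specific project.
# The packages listed here are illustrative examples.
RUN pip install --no-cache-dir sqlalchemy psycopg2-binary
```

Building and tagging such an image is then a single `docker build -t my-notebook .` away, and the result can be run or composed like any public image.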
What You'll Learn
  • Master interactive development using the Jupyter platform
  • Run and build Docker containers from scratch and from publicly available open-source images
  • Write infrastructure as code using the docker-compose tool and its docker-compose.yml file format
  • Deploy a multi-service data science application across a cloud-based system
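A docker-compose.yml file of the kind described above might look like this hypothetical sketch, linking a Jupyter controller to a Postgres data store. The service names, image tags, port mapping, and credential are placeholders, not the book's actual configuration:

```yaml
# docker-compose.yml: a two-service system with Jupyter as the
# controller and Postgres as the backing data store.
version: "3"
services:
  jupyter:
    image: jupyter/scipy-notebook
    ports:
      - "8888:8888"      # expose the Jupyter server to the host
    depends_on:
      - postgres         # start the data store first
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
```

With this file in place, `docker-compose up` brings up both services as a linked system, which is the sense in which the file itself serves as infrastructure as code.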

Who This Book Is For
Data scientists, machine learning engineers, artificial intelligence researchers, Kagglers, and software developers

Product Details

ISBN-13: 9781484230121
Publisher: Apress
Publication date: 08/23/2017
Sold by: Barnes & Noble
Format: eBook
Pages: 257
File size: 2 MB

About the Author

Joshua Cook is a mathematician. He writes code in Bash, C, and Python and has done pure and applied computational work in geospatial predictive modeling, quantum mechanics, semantic search, and artificial intelligence. He also has 10 years' experience teaching mathematics at the secondary and post-secondary levels. His research interests lie in high-performance computing, interactive computing, feature extraction, and reinforcement learning. He is always willing to discuss orthogonality or to explain why Fortran is the language of the future over a warm or cold beverage.

Table of Contents

1. Introduction
2. Docker
3. Jupyter
4. Docker Client
5. The Dockerfile
6. Docker Hub
7. The Opinionated Jupyter Stacks
8. The Data Stores
9. Docker Compose
10. Interactive Development