
Deep Learning for Natural Language Processing

Explore the most challenging issues of natural language processing, and learn how to solve them with cutting-edge deep learning!

Inside Deep Learning for Natural Language Processing you’ll find a wealth of NLP insights, including:

An overview of NLP and deep learning
One-hot text representations
Word embeddings
Models for textual similarity
Sequential NLP
Semantic role labeling
Deep memory-based NLP
Linguistic structure
Hyperparameters for deep NLP

Deep learning has advanced natural language processing to exciting new levels and powerful new applications! For the first time, computer systems can achieve "human" levels of summarizing, making connections, and other tasks that require comprehension and context. Deep Learning for Natural Language Processing reveals the groundbreaking techniques that make these innovations possible. Stephan Raaijmakers distills his extensive knowledge into useful best practices, real-world applications, and the inner workings of top NLP algorithms.

Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology
Deep learning has transformed the field of natural language processing. Neural networks recognize not just words and phrases, but also patterns. Models infer meaning from context, and determine emotional tone. Powerful deep learning-based NLP models open up a goldmine of potential uses.

About the book
Deep Learning for Natural Language Processing teaches you how to create advanced NLP applications using Python and the Keras deep learning library. You’ll learn to use state-of-the-art tools and techniques including BERT and XLNet, multitask learning, and deep memory-based NLP. Fascinating examples give you hands-on experience with a variety of real-world NLP applications. Plus, the detailed code discussions show you exactly how to adapt each example to your own uses!
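For flavor, here is a minimal sketch (not code from the book) of the kind of Keras pipeline the examples build on: tokenized text passes through a learned embedding layer into a small classifier.

```python
# Minimal illustrative sketch (not from the book): word embeddings feeding
# a tiny binary classifier in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: two already-tokenized sentences (integer word ids), padded to length 8.
x_train = np.array([[2, 5, 9, 3, 0, 0, 0, 0],
                    [7, 4, 1, 0, 0, 0, 0, 0]])
y_train = np.array([1, 0])

model = keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=16),  # learned word embeddings
    layers.GlobalAveragePooling1D(),                  # average the token vectors
    layers.Dense(1, activation="sigmoid"),            # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, verbose=0)
```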

What's inside

Improve question answering with sequential NLP
Boost performance with linguistic multitask learning
Accurately interpret linguistic structure
Master multiple word embedding techniques

About the reader
For readers with intermediate Python skills and a general knowledge of NLP. No experience with deep learning is required.

About the author
Stephan Raaijmakers is professor of Communicative AI at Leiden University and a senior scientist at The Netherlands Organization for Applied Scientific Research (TNO).

Table of Contents
PART 1 INTRODUCTION
1 Deep learning for NLP
2 Deep learning and language: The basics
3 Text embeddings
PART 2 DEEP NLP
4 Textual similarity
5 Sequential NLP
6 Episodic memory for NLP
PART 3 ADVANCED TOPICS
7 Attention
8 Multitask learning
9 Transformers
10 Applications of Transformers: Hands-on with BERT

296 pages, Paperback

Published December 6, 2022

8 people are currently reading
44 people want to read

Ratings & Reviews

Community Reviews

5 stars: 4 (33%)
4 stars: 5 (41%)
3 stars: 1 (8%)
2 stars: 2 (16%)
1 star: 0 (0%)
1 review
January 12, 2023
I read almost all sections of the book – Deep Learning for Natural Language Processing – by Stephan Raaijmakers, and I find it well organized. The main reason I say this is that the author has taken the reader’s journey into account while designing the flow.
The book (as with all books!) starts with an introduction to deep learning for NLP and then gets into the core techniques, namely embeddings, text similarity, etc., before moving on to advanced topics like attention-based mechanisms. It then culminates in a hands-on section with BERT. Overall, the flow acts as a guide and gives an extremely good reader journey, while a reader who is already familiar with certain areas can skip them and accelerate the journey.

The author has taken good care to sprinkle in generous snippets of code, with clear comments, at various places in the book. This encourages the reader to not just read but also try things out! For the lazy reader, there are also indicative outputs for some of the important sections. So, in essence, it offers both the theory and practice of the subject, within the limited set of areas it covers out of the vast range of topics in the field. I have not tested the code myself, but the flow does make sense. The code sections are in Python, using standard libraries like Keras for deep learning. There is also liveBook access, which offers collaboration with the author as well as with other liveBook users.
Some areas that could be improved include better rendering of the flow and architecture diagrams, and links to some of the rapid advancements in this field; this would make the book more interactive for the reader. A few industry case studies would also help readers quickly relate the material to their own use cases...

But overall, it is a good book – read fast before the technology goes obsolete!
Rick Sam
432 reviews · 156 followers
October 23, 2022
A book focused on engineering and industry-oriented Natural Language Processing.

Many detailed examples with code.
I do think one has to have a solid understanding of mathematics either way.

1. What is inside this?

Introduction
Deep NLP
Advanced Topics

2. Thoughts & Summary

There are many tutorials.
I'd say, find works that you would like to implement and build. A good baby step is understanding one such work, the Transformer model.

Recall that the Transformer is built on an encoder-decoder architecture.
The core aspect of the Transformer is multi-head attention.
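
As a quick illustration (my own sketch with the built-in Keras layer, not code from the book), multi-head self-attention lets every token attend to every other token in the sequence:

```python
# Illustrative sketch: multi-head self-attention over a batch of token embeddings.
import tensorflow as tf

x = tf.random.normal((2, 10, 64))                  # (batch, tokens, embedding dim)
mha = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=8)
out = mha(query=x, value=x, key=x)                 # self-attention: q = k = v
print(out.shape)                                   # (2, 10, 64)
```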

Attention Mechanism:

[diagram by Renu Khandelwal]

Recall that the attention mechanism was introduced to get around the fixed-length vector bottleneck in the encoder-decoder architecture.

So - How was it solved?

a. Alignment score
b. Weight
c. Context Vector

Alignment Score:

So in the encoder-decoder, we have the encoder hidden states and the previous decoder state.

We take both to compute a score, the alignment score - why?
To capture how well each element of the input aligns with the current output position.

Weights:

We apply a softmax operation to the alignment scores to obtain the attention weights.

Context Vector:

This is given to the decoder at each time step; it is the weighted sum of all encoder hidden states.
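
Putting the three steps together, a rough NumPy sketch (my own illustration, using a simple dot-product score; other scoring functions exist):

```python
import numpy as np

enc_states = np.random.randn(5, 8)   # 5 encoder hidden states, dimension 8
dec_state = np.random.randn(8)       # previous decoder hidden state

# a. Alignment scores: how well each input position matches the current decoder state.
scores = enc_states @ dec_state                    # shape (5,)

# b. Weights: softmax over the alignment scores.
weights = np.exp(scores) / np.exp(scores).sum()    # shape (5,), sums to 1

# c. Context vector: weighted sum of all encoder hidden states, fed to the decoder.
context = weights @ enc_states                     # shape (8,)
```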

attention(q, K, V) = a weighted sum of the value vectors, where each value is weighted by how well the query matches its corresponding key.
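
Written as code, this is standard scaled dot-product attention (my own sketch; the book's exact implementation may differ):

```python
import numpy as np

def attention(q, K, V):
    """q: query (d,), K: keys (n, d), V: values (n, d_v).
    Returns a weighted sum of the value vectors."""
    scores = K @ q / np.sqrt(K.shape[-1])            # match the query against each key
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over the scores
    return weights @ V                               # weighted sum of the values
```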

We have soft attention and hard attention.

Soft attention is deterministic.
Hard attention is stochastic.
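
A small sketch of the difference (my own illustration): soft attention takes the deterministic weighted average, while hard attention samples a single position from the weight distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.1, 0.7, 0.2])     # attention weights over 3 encoder states
states = rng.standard_normal((3, 4))    # the encoder states themselves

soft_context = weights @ states                    # deterministic weighted average
hard_index = rng.choice(len(weights), p=weights)   # stochastic: sample one position
hard_context = states[hard_index]
```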

The attention mechanism seems to lack reasoning and understanding.


Deus Vult,
Gottfried
Asheesh Mathur
1 review
December 9, 2022
A must for NLP explorers and innovators. I picked this title primarily for the attention and Transformer mechanisms, and it covers them well.
All NLP concepts are explained well and related to each other, and they can safely be implemented in both PyTorch and TensorFlow.

Considering the popularity of PyTorch, it makes sense to incorporate a corresponding implementation in PyTorch as well.
