Adversarial augmentation is less common in NLP (an image of a bear with randomly added pixels still looks like a bear, but adding random characters to a sentence will likely render it gibberish), but perturbation has been used to make models more robust. One of the most notable examples is BERT, where the model selects 15% of all tokens in each sequence at random and replaces 10% of those selected tokens with random words.
From Chip Huyen, Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications
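As a rough illustration of the random-replacement perturbation described above, here is a minimal Python sketch. The function name perturb_tokens, the toy vocabulary, and the way the probabilities are applied are assumptions for illustration only; BERT itself masks most of the selected tokens and uses random-word replacement for just this 10% slice of them.

import random

def perturb_tokens(tokens, vocab, select_prob=0.15, replace_prob=0.10, seed=None):
    # Hypothetical sketch of BERT-style perturbation: select roughly 15% of
    # tokens at random, then swap roughly 10% of the selected tokens for
    # random vocabulary words. (BERT replaces most selected tokens with a
    # [MASK] token instead; only this random-word slice is shown here.)
    rng = random.Random(seed)
    out = list(tokens)
    # Each token is independently selected with probability select_prob.
    chosen = [i for i in range(len(out)) if rng.random() < select_prob]
    for i in chosen:
        # A selected token is replaced with probability replace_prob.
        if rng.random() < replace_prob:
            out[i] = rng.choice(vocab)  # random-word replacement
    return out

# Toy usage with a made-up vocabulary:
vocab = ["bear", "river", "walks", "quietly", "forest", "moon"]
sentence = "the quick brown fox jumps over the lazy dog".split()
print(perturb_tokens(sentence, vocab, seed=0))

Because the perturbed sentence keeps its original length and most of its original words, the model sees slightly corrupted inputs during training, which is what makes the learned representations more robust to small input changes.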