Aditya Chatterjee's Blog, page 64
November 27, 2022
Abstract Data Type in Data Structure
In this article, we explain the concept of Abstract Data Types (ADTs) in Data Structures in depth, along with code examples and theoretical examples of Data Structures. The best examples of Abstract Data Types come from the C++ STL (Standard Template Library) and Java Collections.
Table of contents:
1. Introduction to Abstract Data Type (ADT)
2. Utility
3. Model of Abstract Data Type
4. Implementation of ADT in Java
5. ADT example with shape, rectangle
6. ADT example with interface
7. Widely Utilised ADTs
7.1. List ADT
7.2. ...
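As a quick illustration of the core idea, here is a minimal sketch of a Stack ADT in Python (the full article's examples use Java and the C++ STL; the class names here are my own): the abstract class specifies only the operations, while the concrete class supplies one possible representation.

```python
from abc import ABC, abstractmethod

class StackADT(ABC):
    """Abstract Data Type: only the operations are specified,
    not how the data is stored."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...
    @abstractmethod
    def peek(self): ...
    @abstractmethod
    def is_empty(self): ...

class ListStack(StackADT):
    """One possible implementation, backed by a Python list."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def peek(self):
        return self._items[-1]
    def is_empty(self):
        return not self._items

s = ListStack()
s.push(1)
s.push(2)
print(s.pop())   # 2
print(s.peek())  # 1
```

Client code depends only on the `StackADT` interface, so the list-backed implementation could be swapped for, say, a linked-list one without changing callers.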
Data Wrangling
In this article, we explore the concept of Data Wrangling, a critical phase in Data Science. We also cover the tools used for Data Wrangling.
Table of contents:
Introduction to Data Wrangling
Utility of data wrangling
Importance of the process
Tools Employed
Approach to Data Wrangling

Point | Data Wrangling
What is it? | The processing of a dataset for further analysis or to make it compatible
Part of? | A phase in Data Science
Tools | Microsoft Excel, Google Sheets...

2D Sliding Window
In this article, we explore how to apply the Sliding Window technique to a 2D array, i.e., the 2D Sliding Window. We present the concept along with an implementation and its time and space complexity.
Table of Contents
Sliding Window
Use of Sliding Window
Sliding window on a 1D array
Sliding window on a 2D array
Time and Space complexity of 2D sliding window

Sliding Window
Several methods are used for solving computational problems. These include arithmetic operations, loops, conditional ...
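To make the technique concrete before the full article, here is a minimal sketch (function name is my own) that computes the sum of every k×k window of a 2D array using a 2D prefix-sum table, so each window sum is O(1) after O(n·m) preprocessing.

```python
def window_sums_2d(grid, k):
    """Sum of every k x k window of a 2D list, via a 2D prefix-sum table."""
    n, m = len(grid), len(grid[0])
    # P[i][j] = sum of grid[0..i-1][0..j-1]
    P = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            P[i + 1][j + 1] = grid[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    out = []
    for i in range(k, n + 1):
        row = []
        for j in range(k, m + 1):
            # inclusion-exclusion on the prefix table gives the window sum
            row.append(P[i][j] - P[i - k][j] - P[i][j - k] + P[i - k][j - k])
        out.append(row)
    return out

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(window_sums_2d(grid, 2))  # [[12, 16], [24, 28]]
```

A naive recomputation of each window costs O(n·m·k²); the prefix-sum variant removes the k² factor entirely.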
Nature Inspired Algorithms
Nature does not hurry yet everything is accomplished.
Contents

There is no better teacher than nature herself; she never rushes, yet everything is accomplished. Have you ever wondered how the human brain is able to distinguish thousands of dog breeds from cats, even for breeds we have never seen before? How do ants find their way to the food? Let's learn about algorithms inspired by Mother Nature and their applications in varied fields!
Optimisation Algorithm
Important t...
Types of Gradient Optimizers in Deep Learning
In this article, we will explore the concept of Gradient optimization and the different types of Gradient Optimizers in Deep Learning, such as the Mini-batch Gradient Descent Optimizer.
Table of Contents
Machine Learning & Deep Learning
Optimization Algorithms or Optimizers
Gradient Descent Optimizer and its types
Other types of Optimizers
Summary of the details

Point | Gradient Optimizer
What is it? | Optimizers are used for minimizing the loss function and maximizing the ef...

The Vision Transformer
In recent years, the transformer model has achieved great success, especially in natural language processing applications such as language translation and automatic question answering, where it reached state-of-the-art results. Following this success, researchers tried to apply the model in many other domains. In 2020, Alexey Dosovitskiy et al. [1] used the transformer model to build a new network for image recognition called the Vision Transformer, which we will try to explain and to implemen...
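The first step of the Vision Transformer is to split the input image into fixed-size patches, which are flattened and treated as the token sequence fed to the transformer. A minimal framework-free sketch of that patching step (function name is my own; the real model then linearly projects each patch and adds position embeddings):

```python
def split_into_patches(image, p):
    """Split an H x W image (2D list) into flattened p x p patches,
    in row-major order — the ViT token sequence before projection."""
    H, W = len(image), len(image[0])
    assert H % p == 0 and W % p == 0, "image dims must be divisible by patch size"
    patches = []
    for r in range(0, H, p):
        for c in range(0, W, p):
            # flatten one p x p patch into a single token vector
            patch = [image[r + i][c + j] for i in range(p) for j in range(p)]
            patches.append(patch)
    return patches

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(split_into_patches(img, 2))
# [[1, 2, 5, 6], [3, 4, 7, 8], [9, 10, 13, 14], [11, 12, 15, 16]]
```

A 4×4 image with patch size 2 yields 4 tokens of dimension 4; in the paper's setup a 224×224 image with 16×16 patches yields 196 tokens.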
VGG54 and VGG22
VGG54 and VGG22 are loss metrics that compare high- and low-resolution images by considering the feature maps generated by the VGG19 neural network model.
This was first introduced in the paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Christian Ledig et al. from Twitter, published in 2017.
VGG54
VGG54 is defined as the loss equal to the Euclidean distance between the ϕ5,4 feature maps from the high- and low-resolution images, generated using the SRGAN-VGG19 based...
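In spirit, the VGG loss is the mean squared (Euclidean) distance between feature maps of the reconstructed and reference images. A minimal sketch, with toy lists standing in for the ϕ5,4 feature maps (in the real method these come from layer 4 after the 5th max-pooling block of a pretrained VGG19):

```python
def vgg_feature_loss(feat_hr, feat_sr):
    """Mean squared Euclidean distance between two flattened feature maps."""
    assert len(feat_hr) == len(feat_sr)
    return sum((a - b) ** 2 for a, b in zip(feat_hr, feat_sr)) / len(feat_hr)

# Toy stand-ins for phi_{5,4}(I_HR) and phi_{5,4}(G(I_LR)):
phi_hr = [0.2, 0.5, 0.9, 0.1]
phi_sr = [0.1, 0.6, 0.7, 0.1]
print(vgg_feature_loss(phi_hr, phi_sr))
```

Unlike per-pixel MSE, computing the distance in VGG feature space penalises perceptual differences rather than exact pixel matches, which is why SRGAN uses it.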
November 25, 2022
Gradient Accumulation [+ code in PyTorch]
Gradient Accumulation is an optimization technique used for training large Neural Networks on a GPU; it helps reduce memory requirements and resolve Out-of-Memory (OOM) errors during training. We explain the concept along with PyTorch code.
Table of contents:
Background on training Neural Networks
What is the problem in this training process?
Gradient Accumulation
Gradient Accumulation in Pytorch
Properties of Gradient Accumulation
Concluding Note
The following table summarizes the concep...
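The core idea is framework-independent: sum gradients over N small micro-batches and only then take one optimizer step, which is equivalent to one step on an N-times-larger batch. A minimal pure-Python sketch with a one-parameter least-squares model (the article's full version uses PyTorch; all names here are my own):

```python
def grad(w, batch):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over a batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def train_with_accumulation(data, micro_batch_size, accum_steps, lr=0.1, epochs=50):
    w = 0.0
    micro = [data[i:i + micro_batch_size]
             for i in range(0, len(data), micro_batch_size)]
    for _ in range(epochs):
        acc, n = 0.0, 0
        for mb in micro:
            acc += grad(w, mb)       # accumulate gradients, do NOT step yet
            n += 1
            if n == accum_steps:     # one optimizer step per accum_steps micro-batches
                w -= lr * (acc / accum_steps)
                acc, n = 0.0, 0
    return w

# Data from y = 3x: w should converge to ~3 even though
# only micro_batch_size samples are "in memory" at a time.
data = [(x, 3 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = train_with_accumulation(data, micro_batch_size=2, accum_steps=2)
print(round(w, 3))  # 3.0
```

In PyTorch the same pattern is: call `loss.backward()` on each micro-batch (gradients accumulate in `.grad` by default), then call `optimizer.step()` and `optimizer.zero_grad()` only every `accum_steps` iterations.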
[SOLVED] failed to solve with frontend dockerfile.v0
In this article, we explore the reason behind the error "failed to solve with frontend dockerfile.v0" and present five ways to fix it. This error is related to the BuildKit and LLB components of Docker.
Table of contents:
The Error: failed to solve with frontend dockerfile.v0
Fix 1: Disable Buildkit of Docker
Fix 2: Ensure Dockerfile is named correctly
Fix 3: Specify filename in docker-compose.yml
Fix 4: Delete token seed
Fix 5: Disable Buildkit from settings
Concluding Note
Followi...
November 24, 2022
Calculate mean and std of Image Dataset
In this article, we have explained how to calculate the mean and standard deviation (std) of an image dataset which can be used to normalize images in the dataset for effective training of Neural Networks.
The challenge is to compute the mean and std in batches, as loading the entire image dataset at once incurs significant memory overhead. We present Python code using PyTorch.
Table of contents:
Why we need mean and std of Image Dataset?
Calculate mean and std of Image Dataset
Code to calcula...
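The batched computation can be sketched framework-free: accumulate the running count, sum, and sum of squares one batch at a time, then derive the mean and std at the end (the article's version does the same per channel with PyTorch tensors; the function name is my own).

```python
import math

def mean_std_in_batches(batches):
    """Mean and (population) std over all values, seen one batch at a time,
    so the full dataset never has to fit in memory at once."""
    n = s = sq = 0
    for batch in batches:
        n += len(batch)
        s += sum(batch)
        sq += sum(x * x for x in batch)
    mean = s / n
    var = sq / n - mean * mean     # Var[X] = E[X^2] - (E[X])^2
    return mean, math.sqrt(max(var, 0.0))

# Toy "pixel value" batches; real batches would come from a DataLoader.
batches = [[0.0, 0.5], [1.0, 0.5]]
mean, std = mean_std_in_batches(batches)
print(mean, std)  # 0.5 0.3535533905932738
```

The resulting mean and std are exactly what would be obtained from the whole dataset at once, which is what makes the batched approach safe to use for normalization.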