Empower your deep learning models by harnessing some immensely powerful image processing algorithms.
Many deep learning courses open with an introduction to basic image processing techniques (resizing, cropping, color-to-grayscale conversion, rotation, etc.) but give these concepts only a cursory glance.
Along the way, we often discount some immensely powerful image processing algorithms that can do wonders for our final model predictions.
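To ground the basics, here is a minimal sketch of three of those operations using plain NumPy on a made-up 4x4 RGB array (a real pipeline would load an actual image with a library such as Pillow or OpenCV):

```python
import numpy as np

# Toy 4x4 RGB "image" (stand-in data for illustration; values play the
# role of pixel intensities).
img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)

# Color-to-grayscale: weighted sum over the RGB channels
# (standard ITU-R BT.601 luma weights).
gray = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

# Cropping: plain array slicing keeps the top-left 2x2 region.
crop = img[:2, :2]

# Rotation by 90 degrees: np.rot90 rotates in the two spatial axes,
# leaving the channel axis untouched.
rot = np.rot90(img, k=1, axes=(0, 1))

print(gray.shape, crop.shape, rot.shape)  # (4, 4) (2, 2, 3) (4, 4, 3)
```

Each operation here is a pure array transform, which is why these preprocessing steps compose so cheaply before a model ever sees the data.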
For folks who are familiar with the general ML paradigm, you know how vital data cleaning and feature engineering are for the success of any model. …
In the model training phase, a model learns its parameters. But there are also some secret knobs, called hyperparameters, that the model cannot learn on its own; these are left to us to tune. Tuning hyperparameters can significantly improve model performance. Unfortunately, there is no definite procedure for calculating good hyperparameter values, which is why hyperparameter tuning is often regarded as more of an art than a science.
In this article, I discuss the three most popular hyperparameter tuning algorithms: grid search, random search, and Bayesian optimization.
Model training is the process through which a model learns its parameters. Besides these, every model also has some hyperparameters that it cannot learn but that can be tuned. In contrast to model parameters, which are learned during training, model hyperparameters are set by the data scientist ahead of training. This process of tuning the various hyperparameter values is called hyperparameter tuning. (Note the use of the term hyperparameter tuning, not hyperparameter training.) …
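The first two strategies can be sketched in pure Python. In this toy example the `validation_loss` function is a made-up stand-in for actually training a model and scoring it on a validation set; grid search exhaustively evaluates every combination, while random search samples a fixed budget of combinations:

```python
import itertools
import random

def validation_loss(lr, batch_size):
    """Hypothetical objective standing in for a real train-and-evaluate
    run; it is minimized at lr=0.01, batch_size=64 by construction."""
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64, 128]}

# Grid search: evaluate every combination of hyperparameter values.
best_grid = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: validation_loss(**params),
)

# Random search: evaluate a fixed budget of randomly sampled combinations.
random.seed(0)
best_rand = min(
    ({k: random.choice(v) for k, v in grid.items()} for _ in range(5)),
    key=lambda params: validation_loss(**params),
)

print(best_grid)  # {'lr': 0.01, 'batch_size': 64}
```

Grid search guarantees it will find the best point on the grid but its cost grows multiplicatively with each new hyperparameter; random search trades that guarantee for a fixed, controllable budget. Bayesian optimization goes further by using past evaluations to decide which combination to try next.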
With so many rapid advances taking place in Natural Language Processing (NLP), it can become overwhelming to keep an objective view of the differences between the various models.
It is important to understand not only how these models differ from each other, but also how one model overcomes the shortcomings of another.
Below I have drawn out a comparison between two very popular models — Word2Vec and BERT.
Word2Vec models generate embeddings that are context-independent, i.e., there is just one numeric vector representation for each word. …
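To make context-independence concrete, here is a toy illustration (the 2-D vectors are made up for the example; a trained Word2Vec model would have learned them from a corpus). A Word2Vec-style embedding is just a lookup table, so "bank" gets the same vector whether it appears next to "river" or "money":

```python
# Toy Word2Vec-style embedding table (hypothetical 2-D vectors).
embeddings = {
    "bank": [0.8, -0.2],
    "river": [0.1, 0.9],
    "money": [0.7, 0.3],
}

def embed(sentence):
    # Context-independent lookup: the surrounding words never matter.
    return [embeddings[w] for w in sentence.split() if w in embeddings]

v1 = embed("river bank")[-1]   # vector for "bank" near "river"
v2 = embed("money bank")[-1]   # vector for "bank" near "money"
print(v1 == v2)  # True: one fixed vector per word, whatever the context
```

A contextual model like BERT, by contrast, would produce different vectors for "bank" in those two sentences, because each token's representation is computed from the whole input.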