With advances in Natural Language Processing (NLP) arriving at such a rapid pace, it can be overwhelming to objectively understand how the various models differ from one another.

It is important to understand not only how these models differ from each other, but also how one model overcomes the…


Learn to correctly interpret the coefficients of Logistic Regression and in the process naturally derive its cost function — the Log Loss!


Overview

Models like Logistic Regression often win over their more complex counterparts when explainability and interpretability are crucial to the solution. …
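For reference, the cost function that the article derives, the log loss (also called binary cross-entropy), is commonly written as follows; the notation here is the standard one, not necessarily the article's own:

```latex
% Log loss over N training examples, where y_i is the true label (0 or 1)
% and \hat{p}_i is the model's predicted probability that y_i = 1:
J(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\Big[\, y_i \log \hat{p}_i + (1 - y_i)\log(1 - \hat{p}_i) \,\Big]
```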


A logical and sequential roadmap to understanding the advanced concepts in training deep neural networks.

Agenda

We will break our discussion into 4 logical parts that build upon each other. For the best reading experience, please go through them sequentially:

1. What is Vanishing Gradient? Why is it a problem? Why…


Ever felt curious about this well-known axiom: “Always scale your features”? Well, read on to get a quick graphical and intuitive explanation!

Motivation

I am sure all of us have seen this popular axiom in machine learning: Always scale your features before training!
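As a minimal sketch of what "scaling" means in practice (scikit-learn's StandardScaler is used here purely as an illustration, not as the article's prescribed method):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales: income (dollars) and age (years).
X = np.array([[50_000.0, 25.0],
              [120_000.0, 40.0],
              [75_000.0, 31.0]])

# Standardization rescales each feature to zero mean and unit variance,
# so no single feature dominates distance- or gradient-based learning.
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled)
```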




Ace your ML interview by quickly understanding which real-world use cases demand higher precision, which demand higher recall, and why.

Why you should read this article

All machine learning interviews expect you to understand the practical application of the precision-recall tradeoff in real-world use cases, beyond just the definitions and formulas.

I have tried…
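To make the tradeoff concrete before diving in, here is a small, self-contained sketch (not taken from the article) that computes both metrics with scikit-learn on toy predictions:

```python
from sklearn.metrics import precision_score, recall_score

# Toy ground truth and predictions from a deliberately "conservative" classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]

# Precision: of everything flagged positive, how much was actually positive?
# Recall:    of everything actually positive, how much did we catch?
print("precision:", precision_score(y_true, y_pred))  # 3/3 = 1.00
print("recall:   ", recall_score(y_true, y_pred))     # 3/4 = 0.75
```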


Focal Loss explained in simple words: what it is, why it is required, and how it is useful, in both an intuitive and a mathematical formulation.

Binary Cross Entropy Loss

Most object detection models use the Cross-Entropy Loss function during training. The idea is to have a loss function that predicts…
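For reference, the standard formulation (using the usual focal-loss notation, where p_t is the model's predicted probability for the true class) is:

```latex
% Binary cross-entropy for a single example:
\mathrm{CE}(p_t) = -\log(p_t)

% Focal loss adds a modulating factor that down-weights easy, well-classified
% examples; \gamma \ge 0 is the focusing parameter (\gamma = 0 recovers CE):
\mathrm{FL}(p_t) = -(1 - p_t)^{\gamma} \log(p_t)
```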


Empower your deep learning models by harnessing some immensely powerful image processing algorithms.


Motivation

Many deep learning courses start with an introduction to basic image processing techniques (resizing, cropping, color-to-grayscale conversion, rotation, etc.) but offer only a cursory glance at these concepts. …
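A minimal sketch of those basic operations, using Pillow purely as an example library (the file name is a placeholder):

```python
from PIL import Image

# Load an image from disk (placeholder path).
img = Image.open("example.jpg")

resized   = img.resize((224, 224))      # resize to a fixed input size
cropped   = img.crop((0, 0, 100, 100))  # crop the top-left 100x100 region
grayscale = img.convert("L")            # color-to-grayscale conversion
rotated   = img.rotate(90)              # rotate 90 degrees counter-clockwise

rotated.save("example_rotated.jpg")
```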


In the model training phase, a model learns its parameters. But there are also some secret knobs, called hyperparameters, that the model cannot learn on its own — these are left to us to tune. Tuning hyperparameters can significantly improve model performance. Unfortunately, there is no definite procedure to calculate…
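As a small illustration of what tuning looks like in practice, here is a sketch using scikit-learn's GridSearchCV on a toy dataset (chosen here only for demonstration, not as the article's recommended procedure):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# C and gamma are hyperparameters: the SVM cannot learn them from the data,
# so we search a small grid and keep the best cross-validated combination.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
```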

Lavanya Gupta

AWS ML Specialist | Instructor & Mentor for Data Science
