Below is a list of posts I have created with the goal of teaching deep learning through a more intuitive approach, or simply to share my ideas with people who might find them interesting.
Deep Learning Posts
- A Causality Summary Part III. Main Reference: https://arxiv.org/abs/2405.08793. Learning in Latent Variable Models: Latent variable models are powerful extensions of probabilistic models. They introduce hidden or unobserved variables to explain dependencies in data that cannot be captured solely by observed variables. These latent variables often represent underlying processes or structures that are not directly measurable. Latent Variable Representation: Given a… Read more: A Causality Summary Part III
- A Causality Summary Part II. Main Reference: https://arxiv.org/abs/2405.08793. Probabilistic Graphical Models: Probabilistic Graphical Models (PGMs) are powerful tools that represent the joint probability distribution of a set of random variables in terms of their conditional dependencies, which are typically defined by a graph structure. A fundamental task in PGMs is sampling from the joint distribution, and this can be achieved… Read more: A Causality Summary Part II
- A Causality Summary Part I. Main Reference: https://arxiv.org/abs/2405.08793. Have you ever thought about the word “causal” in the sentence we’ve all heard: “Smoking causes lung cancer”? It sounds pretty simple, right? The way I see it, at least, is the following: if someone has lung cancer and they smoke, we assume smoking caused it. smoke — causes —> cancer Okay, but… Read more: A Causality Summary Part I
- MLP-Mixer (Theory). TL;DR – This is the first article I am writing to report on my journey studying the MLP-Mixer architecture. It will cover the basics up to an intermediate level; the goal is not to reach an advanced level. Reference: https://arxiv.org/abs/2105.01601. Introduction: The original paper states: We propose the MLP-Mixer architecture (or “Mixer”… Read more: MLP-Mixer (Theory)