
Probabilistic Generative Models Overview

This post is the first, introductory post in the series *From Probabilistic Modeling to Generative Modeling*. It is my attempt to reproduce the knowledge I gained from learning about probabilistic generative models. In this entry, I break down the general concepts needed to understand five probabilistic generative models: Gaussian Mixture Models (GMM), Variational Autoencoders (VAE), Normalizing Flows (NF), Generative Adversarial Networks (GAN), and Diffusion Models (DM), which will be explained in the next posts. In this series, the models are presented in a logical order that I find more intuitive, rather than following a strict categorization. Before starting, it is necessary to understand why generative modeling is useful.

1 – Why Probabilistic Generative Modeling?

To answer this question, we need to understand the difference between discriminative and generative models. Consider a cat/dog classifier: on the original image it may predict \(P(y = \text{cat} \mid x) = 0.95\) and \(P(y = \text{dog} \mid x) = 0.05\), while on a noisy version of the same image it may predict \(P(y = \text{cat} \mid x) = 0.1\) and \(P(y = \text{dog} \mid x) = 0.9\). The classifier predicts probabilities of labels given images, meaning it learns a conditional probability. Classification algorithms summarize complex inputs into simple outputs, discarding most of the original information.

2 – Why Probability?

Probability is the mathematical framework used to quantify uncertainty and reason about data.

2.1 – Bayesian vs. Frequentist

The frequentist approach focuses on long-run frequencies, while the Bayesian approach focuses on belief and uncertainty.

2.2 – Key Rules

Sum rule (for a binary variable \(Y\)):

\[ P(X) = P(X, Y) + P(X, \bar{Y}) \]

Marginalization:

\[ P(X) = \sum_{y} P(X, y) \]

Conditional probability:

\[ P(Y \mid X) = \frac{P(X, Y)}{P(X)} \]

Bayes' theorem:

\[ P(Y \mid X) = \frac{P(X \mid Y)\, P(Y)}{P(X)} \]

3 – Probabilistic Modeling

Maximum likelihood estimation picks the parameters under which the observed data are most probable:

\[ \hat{\theta} = \arg\max_{\theta} \log p(\mathcal{D} \mid \theta) \]

5 – Latent Generative Models

Latent variables represent hidden structures in data; latent variable models capture these hidden patterns.
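To make latent variables and marginalization concrete, here is a minimal sketch: ancestral sampling from a 1-D Gaussian mixture, where a latent variable \(z\) selects a component and the observation \(x\) is then drawn from it. All parameters and names below are illustrative, not taken from any particular dataset or library.

```python
import random

# Minimal sketch: a 1-D Gaussian mixture as a latent variable model.
# The latent variable z picks a mixture component; the observation x
# is then drawn from that component. All parameters are illustrative.

WEIGHTS = [0.3, 0.7]   # P(z = k), the mixing coefficients
MEANS = [-2.0, 3.0]    # component means
STDS = [0.5, 1.0]      # component standard deviations

def sample(rng):
    # Ancestral sampling: first z ~ P(z), then x ~ P(x | z)
    z = rng.choices(range(len(WEIGHTS)), weights=WEIGHTS)[0]
    return rng.gauss(MEANS[z], STDS[z])

rng = random.Random(0)
xs = [sample(rng) for _ in range(10_000)]

# Marginalization in action: the marginal mean of x is
# sum_z P(z) * mean_z = 0.3 * (-2.0) + 0.7 * 3.0 = 1.5
empirical_mean = sum(xs) / len(xs)
```

The empirical mean of the samples approaches 1.5, which is exactly the marginal \( \mathbb{E}[x] = \sum_z P(z)\, \mu_z \): the sum rule applied to the latent variable.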
6 – Hands-On: Generating Images

Images can be treated as samples drawn from a probability distribution.

8 – Sources

- Deep Learning Book
- Bishop Book
- Probabilistic Machine Learning


Model Checking: How is everything connected?

Introduction

This is the final blog in our series on Model Checking. Since we now have knowledge of typical modelling techniques (mainly Kripke structures) and property specification (using LTL and CTL), we are ready to understand how to check the correctness of properties in a given model.

Verification of LTL Formulas

Intersection of Büchi Automata

For two Büchi automata \(B\) and \(B'\), we can build a Büchi automaton for the intersection of their \(\omega\)-languages. The construction is similar to that for finite automata: the resulting automaton's states are pairs of states from both automata. It is also similar to the synchronous product of Kripke structures discussed in the previous blog. The algorithm is defined as follows:

Input: two Büchi automata \(B = (Q, \Sigma, \delta, Q_0, F)\) and \(B' = (Q', \Sigma, \delta', Q'_0, F')\).

1. Construct a new Büchi automaton \(B_{\cap}\) whose state space is \(Q \times Q'\).
2. The initial states are all pairs \((q, q')\) where \(q \in Q_0\) and \(q' \in Q'_0\).
3. The transition relation is defined synchronously: \((q, q') \xrightarrow{a} (r, r')\) if \(q \xrightarrow{a} r\) in \(B\) and \(q' \xrightarrow{a} r'\) in \(B'\).
4. The set of accepting states is \(F \times F'\).
5. Return the Büchi automaton \(B_{\cap}\).

Note that, unlike in the finite-automaton case, taking \(F \times F'\) as the accepting set is only sufficient when all states of one automaton are accepting; in general, Büchi intersection needs a generalized construction that ensures accepting states of both automata are visited infinitely often.

Emptiness Problem on Büchi Automata

Checking whether a Büchi automaton can produce any \(\omega\)-word at all is often called the emptiness problem. There is an efficient algorithm that traverses the automaton and checks for the existence of a lasso with at least one accepting state in its cycle. A lasso is a path from an initial state to a state that is reachable from itself. The algorithm for checking whether a Büchi automaton is empty is given by:

Input: a Büchi automaton \(B = (Q, \Sigma, \delta, Q_0, F)\).

1. Compute the set of states reachable from the initial states \(Q_0\).
2. Compute the strongly connected components (SCCs) of the reachable subgraph.
3. Check whether there exists an SCC that contains at least one accepting state from \(F\) and at least one cycle.
4. If such an SCC exists, return non-empty; otherwise, return empty.

LTL Model Checking

In the previous blog, we discussed the transformation of both LTL formulae and Kripke structures into Büchi automata. These transformations form the core of LTL model checking. Assume we have the automata \(B_M\) for the model and \(B_{\varphi}\) for the LTL formula \(\varphi\). Checking the property on the model is equivalent to proving that the language of all possible paths generated by the model is a subset of the language generated by the formula. In other words, every execution of the model satisfies the property:

\[ L(B_M) \subseteq L(B_{\varphi}) \]

This can be rewritten as:

\[ L(B_M) \cap \overline{L(B_{\varphi})} = \emptyset \]

Using this formulation, the LTL model checking algorithm can be defined as follows:

Input: a Kripke structure \(M\) and an LTL formula \(\varphi\).

1. Build the Büchi automaton \(B_M\) for the model \(M\).
2. Build the Büchi automaton \(B_{\varphi}\) for the LTL formula \(\varphi\).
3. Construct the complement \(\overline{B_{\varphi}}\) (using Safra's construction, later optimized by Piterman).
4. Build the intersection of \(B_M\) and \(\overline{B_{\varphi}}\).
5. Check whether the resulting Büchi automaton generates any word.
6. Return whether \(B_M \cap \overline{B_{\varphi}}\) is empty.

However, the complementation in step 3 is expensive and often infeasible for large models, as it has a worst-case complexity of \(2^{O(n \log n)}\). For this reason, practical model checkers avoid explicit complementation. Instead, we build the Büchi automaton for the negated formula \(\lnot \varphi\) directly, since \(\overline{L(B_{\varphi})} = L(B_{\lnot \varphi})\), leading to a more efficient algorithm:

Input: a Kripke structure \(M\) and an LTL formula \(\varphi\).

1. Build the Büchi automaton \(B_M\) for the model \(M\).
2. Build the Büchi automaton \(B_{\lnot \varphi}\) for the LTL formula \(\lnot \varphi\).
3. Build the intersection of \(B_M\) and \(B_{\lnot \varphi}\).
4. Check whether the resulting Büchi automaton generates any word.
5. Return whether \(B_M \cap B_{\lnot \varphi}\) is empty.

Verification of CTL Formulas

Unlike LTL, CTL model checking is performed directly on the Kripke structure, without translating formulas into automata. CTL formulas are evaluated using graph-based algorithms that rely on fixpoint computations, strongly connected components, and backward reachability. The following two algorithms are examples of CTL model checking; each outputs the set of all states in which the formula holds. Checking whether a Kripke structure satisfies a CTL property then amounts to checking whether an initial state is in the returned set of states.

EG Operator

The formula \(EG\, \varphi\) holds in a state if there exists a path starting from that state such that \(\varphi\) holds globally along the entire path.

1. Compute the set of states that satisfy \(\varphi\).
2. Restrict the Kripke structure to these states.
3. Compute the strongly connected components of the restricted graph.
4. Mark the states that belong to SCCs containing a cycle.
5. Return all states that can reach such SCCs.

EU Operator

The formula \(\varphi\, EU\, \psi\) holds in a state if there exists a path where \(\psi\) eventually holds and \(\varphi\) holds at all preceding states.

1. Initialize a set with all states that satisfy \(\psi\).
2. Iteratively add states that satisfy \(\varphi\) and have a transition to a state already in the set.
3. Repeat until a fixpoint is reached.
4. Return the resulting set of states.

Conclusion

Model checking provides a powerful and fully automated technique for verifying complex systems, especially in security-critical and safety-critical domains. By rigorously exploring all possible executions, it can uncover subtle errors that are difficult to detect through testing alone. Understanding the theoretical foundations behind model checking, including automata theory and temporal logic, allows practitioners to better appreciate its strengths, limitations, and practical optimizations.
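As a closing illustration, the lasso-based emptiness check at the heart of the LTL algorithm can be sketched in a few lines of Python. The encoding is a simplification of my own (a successor dictionary; input symbols are irrelevant for emptiness), and the SCC decomposition is replaced by a direct lasso search for brevity:

```python
from collections import deque

# Sketch of the Büchi emptiness check: the language is non-empty iff
# some accepting state is reachable from an initial state and lies on
# a cycle (a "lasso"). Hypothetical encoding: transitions are a dict
# mapping each state to the set of its successor states.

def reachable(succ, sources):
    """All states reachable from `sources` (including the sources)."""
    seen, queue = set(sources), deque(sources)
    while queue:
        q = queue.popleft()
        for r in succ.get(q, ()):
            if r not in seen:
                seen.add(r)
                queue.append(r)
    return seen

def is_empty(succ, initial, accepting):
    """Return True iff the Büchi automaton accepts no omega-word."""
    for q in reachable(succ, initial) & set(accepting):
        # q lies on a cycle iff q is reachable from its own successors
        if q in reachable(succ, succ.get(q, ())):
            return False  # lasso through an accepting state found
    return True

# Usage: a 3-state automaton whose only cycle alternates between 1 and 2
succ = {0: {1}, 1: {2}, 2: {1}}
assert is_empty(succ, initial={0}, accepting={2}) is False
assert is_empty(succ, initial={0}, accepting={0}) is True  # 0 not on a cycle
```

A production model checker would use Tarjan's or a nested depth-first search instead of this quadratic reachability loop, but the acceptance condition being checked is the same.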
This knowledge is essential for effectively applying model checking to real-world systems.
