AN ANALYSIS OF CONCEPTS AND TECHNIQUES CONCERNING THE USE OF HIDDEN MARKOV MODELS FOR SEQUENTIAL DATA
Description
Much of machine learning is concerned with building parameterized statistical models of systems from data points drawn from some underlying distribution, guided by an inductive bias. In probabilistic models, we model either the joint probability of the input and the output, or the conditional probability of the output given the input (and vice versa, via Bayes' theorem). To elaborate, in a classification task a generative model determines the probability of the input given a particular output class, while a discriminative model finds the probability of the output class given the input. For a large class of models, such as standard ANNs, we assume the individual data points are independent and identically distributed. However, for data such as rainfall measurements across successive days, there is an inherent sequential structure that this assumption fails to capture. Hidden Markov Models (HMMs) are used to model sequential data such as time series, where the successive data points in each observed sequence are temporally related rather than independent: they form a series of measurements evolving with respect to time, the independent variable. HMMs exploit this structure and are applied to a variety of inherently sequential tasks such as speech recognition, language modelling, and time-series analysis. In this paper, we review the mathematics behind building and training an HMM: latent variables, Markov chains, the forward-backward algorithm, and the Viterbi algorithm. We cover Lagrange multipliers and the expectation-maximization algorithm, and see how they are applied to optimize an HMM. We also review some applications of HMMs. [3]
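As a concrete illustration of decoding with an HMM, the following is a minimal sketch of the Viterbi algorithm mentioned above. The state names, observation symbols, and all probability values are invented for illustration and are not taken from the paper; a real model would have its parameters estimated, e.g. via expectation-maximization.

```python
import numpy as np

# Illustrative 2-state HMM: states 0 = "Rainy", 1 = "Sunny";
# observations 0 = "walk", 1 = "shop", 2 = "clean".
start_p = np.array([0.6, 0.4])          # initial state distribution pi[i]
trans_p = np.array([[0.7, 0.3],         # transition probabilities A[i, j]
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],     # emission probabilities B[i, k]
                   [0.6, 0.3, 0.1]])

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations `obs`."""
    n_states = start_p.shape[0]
    T = len(obs)
    # delta[t, i]: max probability of any state path ending in state i at time t
    delta = np.zeros((T, n_states))
    # psi[t, i]: argmax predecessor state, used for backtracking
    psi = np.zeros((T, n_states), dtype=int)

    delta[0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * trans_p[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * emit_p[j, obs[t]]

    # Backtrack from the most probable final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2], start_p, trans_p, emit_p))  # -> [1, 0, 0]
```

For these toy parameters, the most likely explanation of "walk, shop, clean" is Sunny on day one followed by Rainy on days two and three. In practice, probabilities are multiplied in log space to avoid numerical underflow on long sequences.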
Notes
Files
- 02.AUCS10083.pdf (241.9 kB), md5:a5b7b7cc0822d2a41f2571fa98c9fc87
Additional details
Related works
- Cites
- Journal article: 10.26562/irjcs.2020.v0708.002 (DOI)
- Is cited by
- Journal article: http://www.irjcs.com/volumes/Vol7/iss-8/02.AUCS10083.pdf (URL)
References
- 1. Christopher Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
- 2. Daniel Jurafsky and James H. Martin, Speech and Language Processing, Pearson Education, 2009.
- 3. Marcin Pietrzykowski and Wojciech Sałabun, Applications of Hidden Markov Model: State-of-the-Art, 2014.
- 4. L. R. Rabiner and B. H. Juang, An Introduction to Hidden Markov Models, IEEE ASSP Magazine, 1986.
- 5. Edwin K. P. Chong and Stanislaw Zak, An Introduction to Optimization, Wiley, 2013.
- 6. J. Medhi, Stochastic Processes, New Age International Publishers, 2017.
Subjects
- scientific community