Markov models are important mathematical models for random processes and are used across many scientific disciplines, serving as tools in communications, genetics, systems biology, speech communication, and machine learning. In some of these application areas, however, the resulting models are too large to be useful: on the one hand, it may become impossible to simulate a given model efficiently; on the other hand, the available data may not suffice to determine the model parameters. In these disciplines it is therefore necessary to simplify the mathematical models.

The goal of this research stay is to aggregate Markov models with information-theoretic methods. Aggregation means that states of the original model are grouped together into a single state — in speech communication, for example, the words “research”, “apply”, and “hope” could be grouped together as “verbs”. While the simplification of Markov models has become increasingly popular in recent years, information-theoretic methods are still rarely applied. But what are information-theoretic methods? Information theory deals with the transmission of information and its mathematical foundations. To aggregate Markov models with information-theoretic methods means, loosely speaking, to simplify the model while preserving as much information as possible.

During the research stay we will not only develop theoretical results, but also design algorithms that simplify a given Markov model. The application areas of such algorithms are manifold: in concrete terms, we hope to find a way to simplify the Viterbi algorithm, an important algorithm in communications. Beyond that, we will apply our methods in speech communication (especially during the return phase in Austria).
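To make the notion of aggregation concrete, the following is a minimal illustrative sketch — not the project's actual method — of lumping the states of a Markov chain according to a given partition. The aggregated transition probabilities are weighted by the stationary distribution, which is one common (assumed) choice; the partition and example chain below are hypothetical.

```python
# Illustrative sketch of Markov state aggregation (hypothetical example,
# not the project's method): states are grouped per a given partition and
# transition probabilities are averaged under the stationary distribution.

def stationary(P, iters=1000):
    """Approximate the stationary distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def aggregate(P, partition):
    """Lump states into groups; Q[a][b] is the stationary-weighted
    probability of moving from group a to group b in one step."""
    pi = stationary(P)
    m = len(partition)
    Q = [[0.0] * m for _ in range(m)]
    for a, group_a in enumerate(partition):
        weight = sum(pi[i] for i in group_a)
        for b, group_b in enumerate(partition):
            Q[a][b] = sum(pi[i] * P[i][j]
                          for i in group_a for j in group_b) / weight
    return Q

# A 3-state chain; states 0 and 1 are aggregated into one group.
P = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
Q = aggregate(P, [[0, 1], [2]])
```

The resulting `Q` is again a stochastic matrix (each row sums to one), so the aggregated process can itself be treated as a Markov model — the question the project addresses is how to choose the partition so that as little information as possible is lost.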
Effective start/end date: 9/11/15 → 15/06/18