
# Machine Learning: Hidden Markov Models


## Markov Models

Markov models are time series that have the Markov property:
$$P\left(s_t \mid s_{t-1}, s_{t-2}, \ldots, s_1\right)=P\left(s_t \mid s_{t-1}\right)$$
where $s_t$ is the state of the system at time $t$. Intuitively, this property says that the probability of a state at time $t$ is completely determined by the system state at the previous time step. More generally, for any set $A$ of indices less than $t$ and any set $B$ of indices greater than $t$ we have:
$$P\left(s_t \mid\left\{s_i\right\}_{i \in A \cup B}\right)=P\left(s_t \mid s_{\max (A)}, s_{\min (B)}\right)$$
which follows from the Markov property.

Another useful identity which also follows directly from the Markov property is:
$$P\left(s_{t-1}, s_{t+1} \mid s_t\right)=P\left(s_{t-1} \mid s_t\right) P\left(s_{t+1} \mid s_t\right)$$
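This conditional-independence identity can be checked numerically. The sketch below enumerates the full joint $P(s_1, s_2, s_3)$ of a small discrete chain (the 2-state transition matrix and initial distribution are arbitrary illustrative values, not from the text) and verifies that, given the present state $s_2$, the past $s_1$ and future $s_3$ are independent:

```python
import numpy as np

# Hypothetical 2-state chain: transition matrix A (rows sum to 1) and
# initial distribution a; values chosen arbitrarily for illustration.
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])
a = np.array([0.6, 0.4])

# Full joint over three steps: P(s1, s2, s3) = a[s1] * A[s1, s2] * A[s2, s3].
joint = a[:, None, None] * A[:, :, None] * A[None, :, :]

for s2 in range(2):
    p_s2 = joint.sum(axis=(0, 2))[s2]        # marginal P(s2)
    cond = joint[:, s2, :] / p_s2            # conditional P(s1, s3 | s2)
    p_past = cond.sum(axis=1)                # P(s1 | s2)
    p_future = cond.sum(axis=0)              # P(s3 | s2)
    # Past and future factorize given the present:
    assert np.allclose(cond, np.outer(p_past, p_future))
```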
Discrete Markov Models. An important example of Markov chains are discrete Markov models. Each state $s_t$ takes on one of a discrete set of values, and the probability of transitioning from one state to another is governed by a single probability table shared across the whole sequence. More concretely, $s_t \in\{1, \ldots, K\}$ for some finite $K$ and, for all times $t$, $P\left(s_t=j \mid s_{t-1}=i\right)=A_{i j}$, where $A$ is a parameter of the model: a fixed matrix of valid probabilities (so that $A_{i j} \geq 0$ and $\sum_{j=1}^K A_{i j}=1$). To fully characterize the model, we also require a distribution over states for the first time-step: $P\left(s_1=i\right)=a_i$.
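Given $a$ and $A$, sampling a sequence from a discrete Markov model is straightforward: draw $s_1$ from $a$, then repeatedly draw $s_t$ from row $s_{t-1}$ of $A$. A minimal sketch, assuming $K=3$ and arbitrary illustrative parameter values (states are 0-indexed in code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not from the text.
A = np.array([[0.80, 0.10, 0.10],
              [0.20, 0.60, 0.20],
              [0.25, 0.25, 0.50]])   # A[i, j] = P(s_t = j | s_{t-1} = i)
a = np.array([1.0, 0.0, 0.0])        # a[i] = P(s_1 = i)

def sample_chain(T):
    """Draw s_1, ..., s_T from the discrete Markov model."""
    s = np.empty(T, dtype=int)
    s[0] = rng.choice(3, p=a)
    for t in range(1, T):
        # The transition distribution depends only on the previous state.
        s[t] = rng.choice(3, p=A[s[t - 1]])
    return s

states = sample_chain(20)
```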

## Hidden Markov Models

A Hidden Markov model (HMM) models a time-series of observations $\mathbf{y}_{1:T}$ as being determined by a "hidden" discrete Markov chain $s_{1:T}$. In particular, the measurement $\mathbf{y}_t$ is assumed to be determined by an "emission" distribution that depends on the hidden state at time $t$: $p\left(\mathbf{y}_t \mid s_t=i\right)$. The Markov chain is called "hidden" because we do not measure it, but must reason about it indirectly. Typically, $s_t$ encodes underlying structure of the time-series, whereas the $\mathbf{y}_t$ correspond to the measurements that are actually observed. For example, in speech modeling applications, the measurements $\mathbf{y}_t$ might be the waveforms measured from a microphone, and the hidden states might be the corresponding words that the speaker is uttering. In language modeling, the measurements might be discrete words, and the hidden states their underlying parts of speech.

HMMs can be used for discrete or continuous data; in this course, we will focus solely on the continuous case, with Gaussian emission distributions.
The joint distribution over the observed and hidden variables is:
$$p\left(s_{1: T}, \mathbf{y}_{1: T}\right)=p\left(\mathbf{y}_{1: T} \mid s_{1: T}\right) P\left(s_{1: T}\right)$$
where
$$P\left(s_{1: T}\right)=P\left(s_1\right) \prod_{t=2}^T P\left(s_t \mid s_{t-1}\right)$$
and
$$P\left(\mathbf{y}_{1: T} \mid s_{1: T}\right)=\prod_{t=1}^T p\left(\mathbf{y}_t \mid s_t\right)$$
The Gaussian model says:
$$p\left(\mathbf{y}_t \mid s_t=i\right)=\mathcal{N}\left(\mathbf{y}_t ; \boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i\right)$$
for some mean and covariance parameters $\boldsymbol{\mu}_i$ and $\boldsymbol{\Sigma}_i$. In other words, each state $i$ has its own Gaussian with its own parameters. A complete HMM consists of the following parameters: $a$, $A$, $\boldsymbol{\mu}_{1:K}$, and $\boldsymbol{\Sigma}_{1:K}$. As a shorthand, we will denote these parameters by a variable $\theta=\left\{a, A, \boldsymbol{\mu}_{1:K}, \boldsymbol{\Sigma}_{1:K}\right\}$. Note that if $A_{ij}=a_j$ for all $i$, then this model is equivalent to a mixture-of-Gaussians model with mixing proportions given by the $a_i$'s, since the distribution over states at any instant does not depend on the previous state.
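The factorization above can be translated directly into code: the joint log-density $\log p(s_{1:T}, \mathbf{y}_{1:T})$ is the log initial probability, plus the log transition probabilities, plus the Gaussian emission log-densities. A minimal sketch, assuming a 2-state, 2-dimensional model with illustrative parameter values (not from the text):

```python
import numpy as np

# Illustrative HMM parameters theta = (a, A, mu, Sigma).
a = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu = np.array([[0.0, 0.0],
               [3.0, 3.0]])
Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])

def gauss_logpdf(y, m, S):
    """Log-density of a multivariate Gaussian N(y; m, S)."""
    d = len(m)
    diff = y - m
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                   + diff @ np.linalg.solve(S, diff))

def joint_log_prob(s, y):
    """log P(s_{1:T}) + log p(y_{1:T} | s_{1:T}) under the HMM factorization."""
    lp = np.log(a[s[0]])                               # initial state
    lp += sum(np.log(A[s[t - 1], s[t]])                # transitions
              for t in range(1, len(s)))
    lp += sum(gauss_logpdf(y[t], mu[s[t]], Sigma[s[t]])  # emissions
              for t in range(len(s)))
    return lp
```

This computes the joint for one fixed state sequence; the algorithms discussed next are what let us sum or maximize over all $K^T$ sequences efficiently.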
In the remainder of this chapter, we will discuss algorithms for computing with HMMs.

