# Computer Science Help|Machine Learning Help|K-means Clustering


## Computer Science Help|Machine Learning Help|$K$-means Clustering

We begin with a simple method called $K$-means. Given $N$ input data vectors $\left\{\mathbf{y}_i\right\}_{i=1}^N$, we wish to label each vector as belonging to one of $K$ clusters. This labeling will be done via a binary matrix $\mathbf{L}$, the elements of which are given by
$$L_{i, j}= \begin{cases}1 & \text{if data point } i \text{ belongs to cluster } j \\ 0 & \text{otherwise}\end{cases}$$
The clustering is mutually exclusive: each data vector $i$ can be assigned to only one cluster, so $\sum_{j=1}^K L_{i, j}=1$. Along the way, we will also be estimating a center $\mathbf{c}_j$ for each cluster. The full objective function for $K$-means clustering is:
$$E(\mathbf{c}, \mathbf{L})=\sum_{i, j} L_{i, j}\left\|\mathbf{y}_i-\mathbf{c}_j\right\|^2$$
This objective function penalizes the distance between each data point and the center of the cluster to which it is assigned. Hence, to minimize this error, we want to bring the cluster centers close to the data assigned to them, and we also want to assign the data to nearby centers.

This objective function cannot be optimized in closed form, and so an iterative method is required. It includes discrete variables (the labels $\mathbf{L}$), and so gradient-based methods aren't directly applicable. Instead, we use a strategy called coordinate descent, in which we alternate between closed-form optimization of one set of variables while holding the other set fixed. That is, we first pick initial values, and then we alternate between updating the labels for the current centers and updating the centers for the current labels.
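
To make the alternation concrete, here is a minimal NumPy sketch of this coordinate descent. The function name `kmeans`, the random initialization, and the convergence test are our own illustrative choices, not prescribed above:

```python
import numpy as np

def kmeans(Y, K, n_iters=100, rng=np.random.default_rng(0)):
    """Coordinate descent for K-means.

    Y: (N, D) array of data vectors y_i.
    Returns the (K, D) centers and the (N,) hard labels.
    """
    N, D = Y.shape
    centers = Y[rng.choice(N, size=K, replace=False)].copy()  # init from data
    labels = np.full(N, -1)
    for _ in range(n_iters):
        # Label step: assign each point to its nearest center
        # (minimizes E over L with the centers held fixed).
        sq_dists = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_labels = sq_dists.argmin(axis=1)
        if np.all(new_labels == labels):
            break  # labels unchanged, so E can no longer decrease
        labels = new_labels
        # Center step: move each center to the mean of its assigned points
        # (minimizes E over c with the labels held fixed).
        for j in range(K):
            if np.any(labels == j):
                centers[j] = Y[labels == j].mean(axis=0)
    return centers, labels
```

Each step can only decrease $E$, so the iteration terminates, though possibly at a local minimum that depends on the initialization.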

## Computer Science Help|Machine Learning Help|K-medoids Clustering

(The material in this section is not required for this course.)
$K$-medoids clustering is a variant of $K$-means with the additional constraint that the cluster centers must be drawn from the data. The following algorithm, called Farthest First Traversal, or Hochbaum-Shmoys, is simple and effective:
Randomly select a data point $\mathbf{y}_i$ as the first cluster center: $\mathbf{c}_1 \leftarrow \mathbf{y}_i$
for $j=2$ to $K$
$\quad$ Find the data point furthest from all existing centers:
$\quad i \leftarrow \arg \max_i \min_{k<j}\left\|\mathbf{y}_i-\mathbf{c}_k\right\|^2$
$\quad \mathbf{c}_j \leftarrow \mathbf{y}_i$
end for
Label all remaining data points according to their nearest centers (as in $K$-means)
This algorithm provides a quality guarantee: it gives a clustering that is no worse than twice the error of the optimal clustering.
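
A minimal sketch of farthest-first traversal under the squared-Euclidean distance used above (the helper name `farthest_first_centers` is ours):

```python
import numpy as np

def farthest_first_centers(Y, K, rng=np.random.default_rng(0)):
    """Hochbaum-Shmoys farthest-first traversal.

    Y: (N, D) data array. Returns the indices of the K chosen centers.
    """
    N = Y.shape[0]
    center_idx = [int(rng.integers(N))]  # random first center c_1
    # min_sq_dist[i] = squared distance from y_i to its nearest chosen center
    min_sq_dist = ((Y - Y[center_idx[0]]) ** 2).sum(axis=1)
    for _ in range(1, K):
        i = int(np.argmax(min_sq_dist))  # point furthest from all centers
        center_idx.append(i)
        # Fold the new center into the running nearest-center distances.
        min_sq_dist = np.minimum(min_sq_dist, ((Y - Y[i]) ** 2).sum(axis=1))
    return center_idx
```

Maintaining the running minimum distances makes the traversal $O(NK)$ rather than $O(NK^2)$.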
$K$-medoids clustering can also be refined by coordinate descent. The labeling step is the same as in $K$-means. However, because each center must be a data point, the center updates must be done by brute-force search over the candidate data points.

## Computer Science Help|Machine Learning Help|Mixtures of Gaussians

The Mixtures-of-Gaussians (MoG) model is a generalization of $K$-means clustering. Whereas $K$-means clustering works for clusters that are more or less spherical, the MoG model can handle oblong and overlapping clusters. The $K$-means algorithm does an excellent job when clusters are well separated, but not when they overlap. MoG algorithms compute a "soft," probabilistic clustering, which allows them to better handle overlapping clusters. Finally, the MoG model is probabilistic, and so it can be used to learn probability distributions from data.
The MoG model consists of $K$ Gaussian distributions, each with its own mean and covariance $\left\{\left(\mu_j, \mathbf{K}_j\right)\right\}$. Each Gaussian also has an associated (prior) probability $a_j$, such that $\sum_j a_j=1$. That is, the probabilities $a_j$ will represent the fraction of the data that are assigned to (or generated by) the different Gaussian components. As a shorthand, we will write all the model parameters with a single variable, i.e., $\theta=\left\{a_{1:K}, \mu_{1:K}, \mathbf{K}_{1:K}\right\}$. When used for clustering, the idea is that each Gaussian component in the mixture should correspond to a single cluster.

The complete probabilistic model comprises the prior probability of each Gaussian component and a Gaussian likelihood over the data (or feature) space for each component:
$$\begin{aligned} P(L=j \mid \theta) &= a_j \\ p(\mathbf{y} \mid \theta, L=j) &= G\left(\mathbf{y} ; \mu_j, \mathbf{K}_j\right) \end{aligned}$$
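
Given these two pieces, Bayes' rule yields the "soft" assignment probabilities $P(L=j \mid \mathbf{y}, \theta)$ mentioned above. A minimal sketch, evaluating $G(\mathbf{y}; \mu_j, \mathbf{K}_j)$ with SciPy's `multivariate_normal` (the helper name `mog_responsibilities` is our own):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mog_responsibilities(y, a, mus, Ks):
    """Posterior probability that each component generated y.

    a: (K,) mixing proportions a_j summing to 1.
    mus: list of K mean vectors; Ks: list of K covariance matrices.
    """
    # Joint terms a_j * G(y; mu_j, K_j) for each component j.
    joint = np.array([a[j] * multivariate_normal(mus[j], Ks[j]).pdf(y)
                      for j in range(len(a))])
    p_y = joint.sum()   # mixture density p(y | theta)
    return joint / p_y  # Bayes' rule: P(L = j | y, theta)
```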

## Computer Science Help|Machine Learning Help|Markov Chain Monte Carlo

MCMC is a very general algorithm for sampling from any distribution. For example, there is no simple method for sampling model weights $\mathbf{w}$ from the posterior distribution, except in special cases (e.g., when the posterior is Gaussian).

MCMC is an iterative algorithm: given a sample $\mathbf{x}_t \sim p(\mathbf{x})$, it modifies that sample to produce a new sample $\mathbf{x}_{t+1} \sim p(\mathbf{x})$. The modification is performed using a proposal distribution $q\left(\mathbf{x}^{\prime} \mid \mathbf{x}\right)$ which, given an $\mathbf{x}$, randomly selects a "mutation" of $\mathbf{x}$. This proposal distribution can be almost anything, and choosing it is up to the user of the algorithm; a common choice is simply a Gaussian centered at $\mathbf{x}$: $q\left(\mathbf{x}^{\prime} \mid \mathbf{x}\right)=\mathcal{N}\left(\mathbf{x}^{\prime} \mid \mathbf{x}, \sigma^2 \mathbf{I}\right)$.
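
A minimal random-walk sketch using the Gaussian proposal above. The accept/reject step is the standard Metropolis rule for a symmetric proposal; the excerpt only describes the proposal, so treat the rest as an added assumption (the function name `metropolis` is ours, and `log_p` may be unnormalized):

```python
import numpy as np

def metropolis(log_p, x0, n_samples, sigma=0.5, rng=np.random.default_rng(0)):
    """Random-walk Metropolis with proposal q(x'|x) = N(x'; x, sigma^2 I)."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        x_prime = x + sigma * rng.standard_normal(x.shape)  # propose a "mutation"
        # Accept with probability min(1, p(x') / p(x)); the symmetric
        # Gaussian proposal cancels out of the acceptance ratio.
        if np.log(rng.uniform()) < log_p(x_prime) - log_p(x):
            x = x_prime
        samples.append(x.copy())
    return np.array(samples)
```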

## Computer Science Help|Machine Learning Help|Principal Components Analysis

PCA is primarily a tool for dealing with high-dimensional data. If our measurements are 17-dimensional, or 30-dimensional, or 10,000-dimensional, manipulating the data can be extremely difficult. Quite often, the actual data can be described by a much lower-dimensional representation that captures all of the structure of the data. PCA is perhaps the simplest approach for finding such a representation, and yet it is also very fast and effective, and so it is widely used. PCA can help in several ways (a short code sketch follows the list):

• Visualization: PCA provides a way to visualize the data, by projecting it down to two or three dimensions that you can plot, in order to get a better sense of it. Furthermore, the principal component vectors sometimes provide insight into the nature of the data as well.
• Preprocessing: Learning complex models of high-dimensional data is often very slow and prone to overfitting; the number of parameters in a model is often exponential in the number of dimensions, meaning that higher-dimensional models require very large data sets. This problem is commonly called the curse of dimensionality. PCA can be used to first map the data to a low-dimensional representation before applying a more sophisticated algorithm to it. With PCA, one can also whiten the representation, which rebalances the weights of the data to give better performance in some cases.
• Modeling: PCA learning is sometimes used as a representation for an entire model, e.g., a prior distribution for new data.
• Compression: PCA can be used to compress data, by replacing the data with its low-dimensional representation.
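
As promised, a minimal sketch of PCA via the SVD of the centered data (the function name `pca` is ours; diagonalizing the sample covariance would give the same basis):

```python
import numpy as np

def pca(Y, d):
    """Project (N, D) data onto its top-d principal components.

    Returns the mean, the (D, d) orthonormal basis W, and the (N, d) codes Z.
    """
    mean = Y.mean(axis=0)
    Yc = Y - mean  # center the data
    # The right singular vectors of the centered data, ordered by singular
    # value, are the principal directions.
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    W = Vt[:d].T   # (D, d) basis
    Z = Yc @ W     # low-dimensional representation
    return mean, W, Z

# For compression or visualization, reconstruct with: Y_hat = mean + Z @ W.T
```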

## Matlab Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: math and computation; algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran.

The name MATLAB stands for "matrix laboratory". MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, it is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of application-specific solutions called toolboxes. Very important to most MATLAB users, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, and simulation, among others.