
Math Assignment Help | COMP5318 Machine Learning

MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam help, and tutoring services for the University of Sydney COMP5318 Machine Learning course!

This is a successful assignment-help case from the University of Sydney's machine learning course.


COMP5318 Course Overview

Machine learning is the process of automatically building mathematical models that explain and generalise datasets. It integrates elements of statistics and algorithm development into the same discipline. Data mining is a discipline within knowledge discovery that seeks to facilitate the exploration and analysis of large quantities of data by automatic and semi-automatic means. This subject provides a practical and technical introduction to machine learning and data mining. Topics covered include discovering patterns in data, classification, regression, feature extraction and data visualisation, as well as the analysis, comparison and usage of various machine learning and statistical techniques.

Learning Outcomes

At the completion of this unit, you should be able to:

  • LO1. understand the basic principles, strengths, weaknesses and applicability of machine learning algorithms for solving classification, regression, clustering and reinforcement learning tasks.
  • LO2. obtain practical experience in designing, implementing and evaluating machine learning algorithms.
  • LO3. gain practical experience in using machine learning software and libraries.
  • LO4. present and interpret data and information in verbal and written form.

COMP5318 Machine Learning HELP (EXAM HELP, ONLINE TUTOR)

Problem 1.

foundations: optimization
For some given $A>0, B>0$, solve
$$
\min _\alpha A e^\alpha+B e^{-2 \alpha}
$$
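A solution sketch, assuming the intended first-order approach: the objective is convex in $\alpha$, since its second derivative $A e^\alpha + 4B e^{-2\alpha}$ is positive, so setting the first derivative to zero characterizes the unique minimizer:
$$
A e^\alpha - 2B e^{-2\alpha} = 0 \;\Longrightarrow\; e^{3\alpha} = \frac{2B}{A} \;\Longrightarrow\; \alpha^* = \frac{1}{3}\ln\frac{2B}{A},
$$
and substituting back gives the optimal value $A e^{\alpha^*} + B e^{-2\alpha^*} = 3\left(\frac{A^2 B}{4}\right)^{1/3}$.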

Problem 2.

foundations: vector calculus
Let $\mathbf{w}$ be a vector in $\mathbb{R}^d$ and $E(\mathbf{w})=\frac{1}{2} \mathbf{w}^T \mathrm{A} \mathbf{w}+\mathbf{b}^T \mathbf{w}$ for some symmetric matrix $\mathrm{A}$ and vector $\mathbf{b}$. Prove that the gradient is $\nabla E(\mathbf{w})=\mathrm{A} \mathbf{w}+\mathbf{b}$ and the Hessian is $\nabla^2 E(\mathbf{w})=\mathrm{A}$.
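A proof sketch in coordinates, writing $E(\mathbf{w})=\frac{1}{2}\sum_{i,j} A_{ij} w_i w_j + \sum_i b_i w_i$ and using the symmetry $A_{ij}=A_{ji}$:
$$
\frac{\partial E}{\partial w_k} = \frac{1}{2}\sum_j A_{kj} w_j + \frac{1}{2}\sum_i A_{ik} w_i + b_k = \sum_j A_{kj} w_j + b_k = (\mathrm{A}\mathbf{w}+\mathbf{b})_k,
\qquad
\frac{\partial^2 E}{\partial w_k \partial w_l} = A_{kl},
$$
which is exactly $\nabla E(\mathbf{w})=\mathrm{A}\mathbf{w}+\mathbf{b}$ and $\nabla^2 E(\mathbf{w})=\mathrm{A}$.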

Problem 3.

foundations: quadratic programming
Following the previous question, if $\mathrm{A}$ is not only symmetric but also positive definite (PD), prove that the solution of $\operatorname{argmin}_{\mathbf{w}} E(\mathbf{w})$ is $-\mathrm{A}^{-1} \mathbf{b}$.
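A sketch of the standard argument: positive definiteness implies $\mathrm{A}$ is invertible, so $\mathbf{w}^* = -\mathrm{A}^{-1}\mathbf{b}$ is well defined and is the unique stationary point, since $\nabla E(\mathbf{w}^*) = \mathrm{A}\mathbf{w}^* + \mathbf{b} = \mathbf{0}$. Writing any other point as $\mathbf{w}^* + \mathbf{v}$ and expanding shows it is the global minimizer:
$$
E(\mathbf{w}^* + \mathbf{v}) - E(\mathbf{w}^*) = \frac{1}{2}\mathbf{v}^T \mathrm{A} \mathbf{v} > 0 \quad \text{for all } \mathbf{v} \neq \mathbf{0},
$$
by positive definiteness of $\mathrm{A}$.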

Problem 4.

techniques: optimization with linear constraint
Consider
$$
\min _{w_1, w_2, w_3} \frac{1}{2}\left(w_1^2+2 w_2^2+3 w_3^2\right) \text { subject to } w_1+w_2+w_3=11 .
$$
Refresh your memory on “Lagrange multipliers” and show that the optimal solution must satisfy $w_1=\lambda$, $2 w_2=\lambda$, $3 w_3=\lambda$. Use this property to solve the problem.
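A worked sketch via the Lagrangian $L(\mathbf{w},\lambda) = \frac{1}{2}(w_1^2 + 2w_2^2 + 3w_3^2) - \lambda(w_1 + w_2 + w_3 - 11)$: setting $\partial L / \partial w_i = 0$ gives exactly $w_1 = \lambda$, $2w_2 = \lambda$, $3w_3 = \lambda$, i.e. $(w_1, w_2, w_3) = (\lambda, \lambda/2, \lambda/3)$. Substituting into the constraint determines $\lambda$:
$$
\lambda\left(1 + \frac{1}{2} + \frac{1}{3}\right) = \frac{11}{6}\lambda = 11 \;\Longrightarrow\; \lambda = 6,
$$
so the optimal solution is $(w_1, w_2, w_3) = (6, 3, 2)$, with objective value $\frac{1}{2}(6^2 + 2\cdot 3^2 + 3\cdot 2^2) = 33$.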

Problem 5.

techniques: optimization with linear constraints
Let $\mathbf{w}$ be a vector in $\mathbb{R}^d$ and $E(\mathbf{w})$ be a convex differentiable function of $\mathbf{w}$. Prove that the optimal solution to
$$
\min_{\mathbf{w}} E(\mathbf{w}) \quad \text{subject to} \quad \mathrm{A}\mathbf{w}+\mathbf{b}=\mathbf{0}
$$
must satisfy $\nabla E(\mathbf{w})+\mathrm{A}^T \boldsymbol{\lambda}=\mathbf{0}$ for some vector $\boldsymbol{\lambda}$. (Hint: if not, let $\mathbf{u}$ be the residual when projecting $\nabla E(\mathbf{w})$ onto the span of the rows of $\mathrm{A}$. Show that for some very small $\eta$, $\mathbf{w}-\eta \mathbf{u}$ is a feasible solution that improves $E$.)
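A proof sketch following the hint: decompose $\nabla E(\mathbf{w}) = -\mathrm{A}^T\boldsymbol{\lambda} + \mathbf{u}$, where $-\mathrm{A}^T\boldsymbol{\lambda}$ is the projection of the gradient onto the span of the rows of $\mathrm{A}$ and $\mathbf{u}$ is the residual, so that $\mathrm{A}\mathbf{u} = \mathbf{0}$ and $\nabla E(\mathbf{w})^T\mathbf{u} = \|\mathbf{u}\|^2$. If $\mathbf{u} \neq \mathbf{0}$, then $\mathbf{w} - \eta\mathbf{u}$ remains feasible because $\mathrm{A}(\mathbf{w} - \eta\mathbf{u}) + \mathbf{b} = \mathrm{A}\mathbf{w} + \mathbf{b} = \mathbf{0}$, and a first-order expansion gives
$$
E(\mathbf{w} - \eta\mathbf{u}) = E(\mathbf{w}) - \eta\,\|\mathbf{u}\|^2 + o(\eta) < E(\mathbf{w})
$$
for small enough $\eta > 0$, contradicting optimality. Hence $\mathbf{u} = \mathbf{0}$ and $\nabla E(\mathbf{w}) + \mathrm{A}^T\boldsymbol{\lambda} = \mathbf{0}$ as claimed.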

Problem 6.

Which of the following problems is suited for machine learning, assuming there is enough associated data? Choose the correct answer; explain how you could use machine learning to solve it (a sketch follows the list of choices).
[a] predicting the winning number of the next invoice lottery
[b] calculating the average score of 500 students
[c] identifying the exact minimal spanning tree of a graph
[d] ranking mango images by the quality of the mangoes
[e] none of the other choices
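A hedged answer sketch, reflecting my reading rather than an official solution: [a] is unsuitable because a fair lottery has no learnable pattern; [b] and [c] are unsuitable because they have exact, efficient algorithmic solutions (simple averaging, and Kruskal's or Prim's algorithm, respectively); [d] is the natural fit, since mango quality follows an unknown pattern that can be learned from labeled examples. The minimal Python sketch below, with entirely hypothetical features and labels, shows one way [d] could be framed as supervised learning: fit a regressor on human-scored mango images, then rank new images by predicted quality.

```python
# A minimal sketch, assuming hypothetical image features and quality labels;
# in practice the features would be extracted from the mango photos.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# 200 labeled training images: 16 hypothetical features each (e.g. color
# histograms), with human-assigned quality scores in [0, 1].
X_train = rng.random((200, 16))
y_train = rng.random(200)

# Fit a regressor that predicts quality from features.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank 10 new, unlabeled mango images by predicted quality, best first.
X_new = rng.random((10, 16))
scores = model.predict(X_new)
ranking = np.argsort(scores)[::-1]
print("Images ranked best to worst:", ranking)
```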


MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam help, and tutoring services for the SYDNEY COMP5318 MACHINE LEARNING course!
