
Math Assignment Help | COMP5318 Machine Learning

MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam support, and tutoring services for the University of Sydney's COMP5318 Machine Learning course!

This is a successful case study of our work for the University of Sydney's machine learning course.


COMP5318 Course Overview

Machine learning is the process of automatically building mathematical models that explain and generalise datasets. It integrates elements of statistics and algorithm development into the same discipline. Data mining is a discipline within knowledge discovery that seeks to facilitate the exploration and analysis of large quantities of data by automatic and semi-automatic means. This subject provides a practical and technical introduction to machine learning and data mining. Topics covered include discovering patterns in data, classification, regression, feature extraction and data visualisation, as well as the analysis, comparison and usage of various machine learning and statistical techniques.

Learning Outcomes

At the completion of this unit, you should be able to:

  • LO1. understand the basic principles, strengths, weaknesses and applicability of machine learning algorithms for solving classification, regression, clustering and reinforcement learning tasks.
  • LO2. have obtained practical experience in designing, implementing and evaluating machine learning algorithms.
  • LO3. have gained practical experience in using machine learning software and libraries.
  • LO4. present and interpret data and information in verbal and written form.

COMP5318 Machine Learning HELP (EXAM HELP, ONLINE TUTOR)

Problem 1.

Consider $\mathcal{U}$ with 6 examples:
$$
\begin{array}{cc}
\mathbf{x} & y \\
\hline
(1,0) & +1 \\
(3,2) & +1 \\
(0,2) & +1 \\
(2,3) & -1 \\
(2,4) & -1 \\
(3,5) & -1
\end{array}
$$
Run the process of choosing any three examples from $\mathcal{U}$ as $\mathcal{D}$, and learn a perceptron hypothesis (say, with PLA, or any "human learning" algorithm of yours) to achieve $E_{\mathrm{in}}(g)=0$ on $\mathcal{D}$. Then, evaluate $g$ outside $\mathcal{D}$. What are the smallest and largest possible values of $E_{\text{ots}}(g)$? Choose the correct answer; explain your answer.
[a] $\left(0, \frac{1}{3}\right)$
[b] $\left(0, \frac{2}{3}\right)$
[c] $\left(\frac{2}{3}, 1\right)$
[d] $\left(\frac{1}{3}, 1\right)$
[e] $(0,1)$
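
A by-hand answer can be sanity-checked by brute force: enumerate all $\binom{6}{3}=20$ choices of $\mathcal{D}$, run PLA to zero in-sample error on each, and measure the error on the remaining three points. The NumPy sketch below does exactly that (all names are ours; note it only reports the $E_{\text{ots}}$ values that this particular PLA happens to attain, while the problem asks about any zero-$E_{\text{in}}$ perceptron):

```python
import itertools
import numpy as np

# The six labelled examples of U from the problem statement.
X = np.array([[1, 0], [3, 2], [0, 2], [2, 3], [2, 4], [3, 5]], dtype=float)
y = np.array([+1, +1, +1, -1, -1, -1])

def pla(X_tr, y_tr, max_iter=10_000):
    """Perceptron Learning Algorithm with a bias coordinate x0 = 1.
    Every 3-example subset here is linearly separable, so PLA halts
    with E_in = 0."""
    Xb = np.hstack([np.ones((len(X_tr), 1)), X_tr])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iter):
        mistakes = np.where(np.sign(Xb @ w) != y_tr)[0]
        if len(mistakes) == 0:
            return w
        w = w + y_tr[mistakes[0]] * Xb[mistakes[0]]  # correct first mistake
    raise RuntimeError("PLA did not converge")

vals = []
for idx in itertools.combinations(range(6), 3):
    out = [i for i in range(6) if i not in idx]
    w = pla(X[list(idx)], y[list(idx)])
    Xb_out = np.hstack([np.ones((len(out), 1)), X[out]])
    vals.append(np.mean(np.sign(Xb_out @ w) != y[out]))

print("smallest E_ots found:", min(vals), " largest E_ots found:", max(vals))
```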

Problem 2.

Suppose you are given a biased coin with one side coming up with probability $\frac{1}{2}+\epsilon$. How many times do you need to toss the coin to find out the more probable side with probability at least $1-\delta$, using Hoeffding's Inequality mentioned on page 10 of lecture 4? Choose the correct answer; explain your answer. (Hint: There are multiple versions of Hoeffding's inequality. Please use the version in the lecture, albeit slightly loose, for answering this question. The $\log$ here is $\log_e$.)
[a] $\frac{1}{2 \epsilon^2 \delta} \log 2$
[b] $\frac{1}{2 \epsilon^2} \log \frac{2}{\delta}$
[c] $\frac{1}{2 \epsilon} \log \frac{2}{\epsilon \delta}$
[d] $\frac{1}{2} \log \frac{2}{\epsilon^{2 \delta}}$
[e] $\log \frac{1}{\epsilon^2 \delta}$
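
One way to set up the computation, as a sketch (assuming the lecture's two-sided form $\mathbb{P}[|\nu-\mu|>\epsilon] \le 2\exp\left(-2\epsilon^2 N\right)$, which is the version the hint points to): the more probable side comes up with probability $\mu=\frac{1}{2}+\epsilon$, and the majority vote over $N$ tosses names the wrong side only if the sample frequency $\nu$ deviates from $\mu$ by more than $\epsilon$. Forcing the deviation bound below $\delta$ gives
$$
2 \exp \left(-2 \epsilon^2 N\right) \leq \delta \quad \Longleftrightarrow \quad N \geq \frac{1}{2 \epsilon^2} \log \frac{2}{\delta} .
$$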

Problem 3.

Consider $\mathbf{x}=\left[x_1, x_2\right]^T \in \mathbb{R}^2$, a target function $f(\mathbf{x})=\operatorname{sign}\left(x_1\right)$, a hypothesis $h_1(\mathbf{x})=\operatorname{sign}\left(2 x_1-x_2\right)$, and another hypothesis $h_2(\mathbf{x})=\operatorname{sign}\left(x_2\right)$. When drawing 5 examples independently and uniformly within $[-1,+1] \times[-1,+1]$ as $\mathcal{D}$, what is the probability that we get 5 examples $\left(\mathbf{x}_n, f\left(\mathbf{x}_n\right)\right)$ such that $E_{\text{in}}\left(h_2\right)=0$? Choose the correct answer; explain your answer. (Note: This is one of the BAD-data cases for $h_2$, where $E_{\text{in}}\left(h_2\right)$ is far from $E_{\text{out}}\left(h_2\right)$.)
[a] 0
[b] $\frac{1}{5}$
[c] $\frac{1}{32}$
[d] $\frac{1}{1024}$
[e] 1
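
The probability can also be estimated empirically. The Monte Carlo sketch below (plain NumPy; the setup and names are ours) draws many 5-point datasets and counts how often $h_2$ agrees with $f$ on all five points, which is exactly the event $E_{\text{in}}(h_2)=0$; the estimate can then be compared against the answer choices:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

# Each trial draws 5 points uniformly from [-1, +1] x [-1, +1].
X = rng.uniform(-1.0, 1.0, size=(trials, 5, 2))
f  = np.sign(X[..., 0])   # target:     f(x)  = sign(x1)
h2 = np.sign(X[..., 1])   # hypothesis: h2(x) = sign(x2)

# E_in(h2) = 0 iff h2 matches f on all 5 points of a dataset.
p_hat = np.mean(np.all(h2 == f, axis=1))
print("estimated P[E_in(h2) = 0]:", p_hat)
```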

Problem 4.

Following the setting of the previous problem, what is the probability that we get 5 examples such that $E_{\text{in}}\left(h_2\right)=E_{\text{in}}\left(h_1\right)$, including both the zero and non-zero $E_{\text{in}}$ cases? Choose the correct answer; explain your answer. (Note: This is one of the BAD-data cases where we cannot distinguish the better-$E_{\text{out}}$ hypothesis $h_1$ from the worse hypothesis $h_2$.)
[a] $\frac{243}{32768}$
[b] $\frac{1440}{32768}$
[c] $\frac{2160}{32768}$
[d] $\frac{3843}{32768}$
[e] $\frac{7776}{32768}$
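
The same Monte Carlo setup extends directly: count the mistakes of $h_1$ and $h_2$ per dataset and check how often the counts tie. Again a sketch under the sampling assumptions above, useful only as a cross-check on the exact combinatorial answer:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000

X = rng.uniform(-1.0, 1.0, size=(trials, 5, 2))
f  = np.sign(X[..., 0])                  # f(x)  = sign(x1)
h1 = np.sign(2 * X[..., 0] - X[..., 1])  # h1(x) = sign(2*x1 - x2)
h2 = np.sign(X[..., 1])                  # h2(x) = sign(x2)

err1 = np.sum(h1 != f, axis=1)  # mistakes of h1 on each 5-point dataset
err2 = np.sum(h2 != f, axis=1)  # mistakes of h2 on each 5-point dataset
print("estimated P[E_in(h1) = E_in(h2)]:", np.mean(err1 == err2))
```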

Problem 5.

According to page 22 of lecture 4, for a hypothesis set $\mathcal{H}$,
$$
\text { BAD } \mathcal{D} \text { for } \mathcal{H} \Longleftrightarrow \exists h \in \mathcal{H} \text { s.t. }\left|E_{\text {out }}(h)-E_{\text {in }}(h)\right|>\epsilon .
$$
Let $\mathbf{x}=\left[x_1, x_2, \cdots, x_d\right]^T \in \mathbb{R}^d$ with $d>1$. Consider a binary classification target with $\mathcal{Y}=\{+1,-1\}$ and a hypothesis set $\mathcal{H}$ with $2d$ hypotheses $h_1, \cdots, h_{2d}$.
For $i=1, \cdots, d, h_i(\mathbf{x})=\operatorname{sign}\left(x_i\right)$.
For $i=d+1, \cdots, 2 d, h_i(\mathbf{x})=-\operatorname{sign}\left(x_{i-d}\right)$.
Extend Hoeffding's Inequality mentioned on page 10 of lecture 4 with a proper union bound. Then, for any given $N$ and $\epsilon$, what is the smallest $C$ that makes this inequality true?
$$
\mathbb{P}[\mathrm{BAD} \mathcal{D} \text { for } \mathcal{H}] \leq C \cdot 2 \exp \left(-2 \epsilon^2 N\right)
$$
Choose the correct answer; explain your answer.
[a] $C=1$
[b] $C=d$
[c] $C=2 d$
[d] $C=4 d$
[e] $C=\infty$
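
For orientation (a sketch, not a full solution): writing the BAD event for $\mathcal{H}$ as a union over its $2d$ member hypotheses and applying the union bound to the lecture's Hoeffding form gives
$$
\mathbb{P}[\mathrm{BAD}\ \mathcal{D} \text { for } \mathcal{H}]=\mathbb{P}\left[\bigcup_{i=1}^{2 d} \mathrm{BAD}\ \mathcal{D} \text { for } h_i\right] \leq \sum_{i=1}^{2 d} \mathbb{P}\left[\mathrm{BAD}\ \mathcal{D} \text { for } h_i\right] \leq 2 d \cdot 2 \exp \left(-2 \epsilon^2 N\right) .
$$
Whether $C=2d$ is actually the smallest valid constant depends on how the events overlap; note that $h_{i+d}=-h_i$, and for binary labels $E_{\text{in}}(-h)=1-E_{\text{in}}(h)$ and $E_{\text{out}}(-h)=1-E_{\text{out}}(h)$, so $h_i$ and $h_{i+d}$ produce exactly the same BAD event.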
