
Math Assignment Help|QBUS6850 Machine Learning

MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam help, and tutoring services for the University of Sydney QBUS6850 Machine Learning course!

This is a successful assignment-help case from the University of Sydney machine learning course.


QBUS6850 Course Overview

Machine Learning is a fundamental aspect of data analytics that automates analytical model building in modern business. In the big data era, managers are able to use very large and rich data sources and to make business decisions based on quantitative data analysis. Machine Learning covers a range of state-of-the-art methods/algorithms that iteratively learn from data, allowing computers to find hidden patterns and relationships in such data so as to support business decisions. This unit introduces modern machine learning techniques and builds skills in using data for everyday business decision making. Topics include: Machine Learning Foundation; Modern Regression Methods; Advanced Classification Techniques; Latent Variable Models; Support Vector Machines (SVM) and Kernel Methods; Artificial Neural Networks; Deep Learning; and Machine Learning for Big Data. Emphasis is placed on applications involving the analysis of business data. Students will practise applying machine learning algorithms to real-world datasets by using an appropriate computing package.

Learning outcomes

At the completion of this unit, you should be able to:

  • LO1. differentiate different types of learning algorithms and identify the advantages and limitations of each method
  • LO2. build a strong machine learning skill set for business decision making
  • LO3. create machine learning models for studying relationships amongst business variables
  • LO4. work with various data sets and identify problems within real-world constraints
  • LO5. demonstrate proficiency in the use of statistical software, e.g. Python, for machine learning model implementation
  • LO6. work productively and collaboratively in a team
  • LO7. present and write insights and suggestions effectively, professionally and ethically.

QBUS6850 Machine Learning HELP (EXAM HELP, ONLINE TUTOR)

Problem 1.

We then illustrate what happens with multiple-bin sampling through an experiment that uses dice (instead of marbles), with each die binding its six faces together. Please note that the dice are not meant to be thrown as random experiments; the probability below refers only to drawing the dice from the bag. Try to view each number as a hypothesis, and each die as an example in our multiple-bin scenario. You can see that no single number is always green – that is, $E_{\text{out}}$ of each hypothesis is always non-zero. In the next two problems, we are essentially asking you to calculate the probability of getting $E_{\text{in}}\left(h_3\right)=0$, and the probability of the minimum $E_{\text{in}}\left(h_i\right)=0$.
Consider four kinds of dice in a bag, with the same (super large) quantity for each kind.

  • A: all even numbers are colored green, all odd numbers are colored orange
  • B: $(2,3,4)$ are colored green, others are colored orange
  • C: the number 6 is colored green, all other numbers are colored orange
  • D: all primes are colored green, others are colored orange
    If we draw 5 dice independently from the bag, which combination has the same probability as getting five green 3’s? Choose the correct answer; explain your answer (a short tabulation sketch follows the options below).
    [a] five green 1’s
    [b] five orange 2’s
    [c] five green 2’s
    [d] five green 4’s
    [e] five green 5’s
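The question reduces to counting, for each face value, how many of the four kinds colour that value green; two face values green on the same number of kinds have the same probability over five independent draws. Here is a minimal Python sketch (not part of the original problem) that simply re-encodes the colourings listed above and tabulates these probabilities:

```python
# Encode which face values each kind of die colours green, as listed above.
green_faces = {
    "A": {2, 4, 6},   # all even numbers green
    "B": {2, 3, 4},   # (2, 3, 4) green
    "C": {6},         # only the number 6 green
    "D": {2, 3, 5},   # all primes green
}

for k in range(1, 7):
    # Probability that one die drawn uniformly from the four kinds shows k in green.
    p = sum(k in faces for faces in green_faces.values()) / 4
    print(f"face {k}: P(green) = {p}, P(five green {k}'s) = {p ** 5}")
```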

Problem 2.

Following the previous problem, if we draw 5 dice independently from the bag, what is the probability that we get some number that is purely green? Choose the correct answer; explain your answer (a brute-force check follows the options below).
[a] $\frac{512}{1024}$
[b] $\frac{333}{1024}$
[c] $\frac{274}{1024}$
[d] $\frac{243}{1024}$
[e] $\frac{32}{1024}$
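Because each of the five dice independently takes one of the four kinds with probability $1/4$, the sample space has $4^5 = 1024$ equally likely ordered draws, which is small enough to enumerate directly. A minimal brute-force sketch (reusing the colouring dictionary from the previous problem) for sanity-checking the count:

```python
# Enumerate all 4^5 = 1024 equally likely ordered draws of five die kinds and
# count those in which at least one face value is green on every die drawn.
from itertools import product

green_faces = {
    "A": {2, 4, 6},
    "B": {2, 3, 4},
    "C": {6},
    "D": {2, 3, 5},
}

favourable = sum(
    any(all(k in green_faces[d] for d in draw) for k in range(1, 7))
    for draw in product("ABCD", repeat=5)
)
print(f"P(some purely green number) = {favourable}/1024")
```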

Problem 3.

$(*)$ Please first follow page 4 of lecture 2, and add $x_0=1$ to every $\mathbf{x}_n$. Implement a version of PLA that randomly picks an example $\left(\mathbf{x}_n, y_n\right)$ in every iteration, and updates $\mathbf{w}_t$ if and only if $\mathbf{w}_t$ is incorrect on the example. Note that the random picking can be simply implemented with replacement – that is, the same example can be picked multiple times, even consecutively. Stop updating and return $\mathbf{w}_t$ as $\mathbf{w}_{\text{PLA}}$ if $\mathbf{w}_t$ is correct consecutively after checking $5N$ randomly-picked examples.

Hint: (1) The update procedure described above is equivalent to the procedure of gathering all the incorrect examples first and then randomly picking an example among the incorrect ones. But the description above is usually much easier to implement. (2) The stopping criterion above is a randomized, more efficient implementation of checking whether $\mathbf{w}_t$ makes no mistakes on the data set.
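For reference, here is a minimal Python sketch of the randomized PLA described above. The data file name ("train.dat") and its format (whitespace-separated features with a $\pm 1$ label in the last column) are assumptions for illustration, not part of the problem statement.

```python
import numpy as np

def pla_random(X, y, rng, halt_after=None):
    """Randomly pick examples with replacement and update w on mistakes.

    Stops once w has been correct on `halt_after` (default 5N) consecutive
    randomly picked examples; returns (w, number_of_updates).
    """
    N, d = X.shape
    if halt_after is None:
        halt_after = 5 * N
    w = np.zeros(d)
    updates, correct_streak = 0, 0
    while correct_streak < halt_after:
        n = rng.integers(N)
        if np.sign(X[n] @ w) != y[n]:   # sign(0) is treated as a mistake here
            w += y[n] * X[n]            # PLA update rule
            updates += 1
            correct_streak = 0
        else:
            correct_streak += 1
    return w, updates

# Hypothetical usage over 1000 random seeds; "train.dat" is a placeholder
# for whatever data set the assignment provides.
data = np.loadtxt("train.dat")
X = np.hstack([np.ones((data.shape[0], 1)), data[:, :-1]])   # prepend x_0 = 1
y = data[:, -1]

results = [pla_random(X, y, np.random.default_rng(seed)) for seed in range(1000)]
print("median number of updates:", np.median([u for _, u in results]))
print("median w_0:", np.median([w[0] for w, _ in results]))
```

The same 1000 runs also yield the $w_0$ values asked about in the next problem.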

Repeat your experiment 1000 times, each with a different random seed. What is the median number of updates before the algorithm returns $\mathbf{w}_{\mathrm{PLA}}$? Choose the closest value.
[a] 8
[b] 11
[c] 14
[d] 17
[e] 20

Problem 4.

$(*)$ Among all the $w_0$ (the zero-th component of $\mathbf{w}_{\mathrm{PLA}}$ ) obtained from the 1000 experiments above, what is the median? Choose the closest value.
[a] -10
[b] -5
[c] 0
[d] 5
[e] 10


MY-ASSIGNMENTEXPERT™ can provide you with assignment writing, exam help, and tutoring services for the University of Sydney COMP5318 Machine Learning course!
