
CS Assignment Help | PCS810 Complex Networks

MY-ASSIGNMENTEXPERT™ can provide assignment writing, exam help, and tutoring services for the torontomu PCS810 Complex Networks course!

This is a successful assignment case from the Complex Networks course at Ryerson University (now Toronto Metropolitan University).


PCS810 Course Description

An introduction to the emerging science of networks, with applications to biology, social science, engineering, and other fields. Students will learn about the field’s origins in graph theory and the surprising properties of real-world networks, such as the small-world effect. They will also learn to analyze the rich structure present in networks through degree correlations, communities, and motifs. Finally, the course discusses how networks shape the spread of large-scale failures such as power blackouts and epidemics.

Prerequisites 

Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure.
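
These quantities are easy to explore empirically. Below is a minimal sketch, assuming the Python `networkx` library, that builds a scale-free graph and reports its degree tail, clustering coefficient, and degree assortativity; the graph model and parameter values are illustrative choices, not part of the course material.

```python
import networkx as nx

# Illustrative scale-free graph: 1000 nodes, each new node attaches to 3
# existing nodes via preferential attachment (parameters chosen arbitrarily).
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Heavy-tailed degree distribution: a few hubs dominate.
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("five largest degrees:", degrees[:5])

# Average clustering coefficient (fraction of closed triangles around a node).
print("average clustering:", nx.average_clustering(G))

# Degree assortativity: negative values indicate disassortative mixing.
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
```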

PCS810 Complex Networks HELP (EXAM HELP, ONLINE TUTOR)

Problem 1.

Coin flips. A fair coin is flipped until the first head occurs. Let $X$ denote the number of flips required.
(a) Find the entropy $H(X)$ in bits. The following expressions may be useful:
$$
\sum_{n=1}^{\infty} r^n=\frac{r}{1-r}, \quad \sum_{n=1}^{\infty} n r^n=\frac{r}{(1-r)^2} .
$$
(b) A random variable $X$ is drawn according to this distribution. Find an “efficient” sequence of yes-no questions of the form, “Is $X$ contained in the set $S$ ?” Compare $H(X)$ to the expected number of questions required to determine $X$.

Solution: (a) The number $X$ of tosses until the first head appears has the geometric distribution with parameter $p=1/2$, where $P(X=n)=p q^{n-1}$, $n \in \{1,2, \ldots\}$. Hence the entropy of $X$ is
$$
\begin{aligned}
H(X) & =-\sum_{n=1}^{\infty} p q^{n-1} \log \left(p q^{n-1}\right) \\
& =-\left[\sum_{n=0}^{\infty} p q^n \log p+\sum_{n=0}^{\infty} n p q^n \log q\right] \\
& =\frac{-p \log p}{1-q}-\frac{p q \log q}{p^2} \\
& =\frac{-p \log p-q \log q}{p} \\
& =H(p) / p \text{ bits.}
\end{aligned}
$$
If $p=1 / 2$, then $H(X)=2$ bits.
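
As a quick numerical sanity check of the closed form $H(X)=H(p)/p$ (a sketch, not part of the original solution), the truncated sum below reproduces the 2-bit answer for $p = 1/2$:

```python
import math

p = 0.5
q = 1 - p

# Truncate the infinite sum; the tail beyond n = 200 is negligible for p = 1/2.
H_direct = -sum(p * q**(n - 1) * math.log2(p * q**(n - 1)) for n in range(1, 201))

# Closed form H(X) = H(p) / p, with H(p) the binary entropy function.
H_closed = (-p * math.log2(p) - q * math.log2(q)) / p

print(H_direct, H_closed)  # both are approximately 2.0 bits
```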

(b) Intuitively, the best questions are those whose yes and no answers are equally likely. One natural guess for the most “efficient” sequence of questions is therefore: Is $X=1$? If not, is $X=2$? If not, is $X=3$? …, with a resulting expected number of questions equal to $\sum_{n=1}^{\infty} n\left(1 / 2^n\right)=2$. This reinforces the intuition that $H(X)$ measures the uncertainty of $X$: in this case the entropy equals the average number of questions needed to determine $X$, and in general $E(\text{number of questions}) \geq H(X)$.

The problem also has an interpretation as a source coding problem. Let $0=$ no, $1=$ yes, $X=$ source, and $Y=$ encoded source. The questions in the above procedure can then be written as a collection of $(X, Y)$ pairs: $(1,1),(2,01),(3,001)$, and so on. In fact, this intuitively derived code is the optimal (Huffman) code for this distribution, minimizing the expected number of questions.
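
A minimal sketch of this question scheme (in Python; the helper name `encode` is introduced only for illustration), confirming that the expected number of questions is 2:

```python
def encode(n):
    """Codeword for X = n under the scheme 'Is X = 1? Is X = 2? ...':
    n - 1 'no' answers followed by one 'yes', i.e. 1 -> '1', 2 -> '01', 3 -> '001'."""
    return "0" * (n - 1) + "1"

# Expected codeword length = expected number of questions, truncated at n = 200.
expected_questions = sum(len(encode(n)) * 0.5**n for n in range(1, 201))
print(expected_questions)  # approximately 2.0, matching H(X) = 2 bits
```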

Problem 2.

Entropy of functions. Let $X$ be a random variable taking on a finite number of values. What is the (general) inequality relationship of $H(X)$ and $H(Y)$ if
(a) $Y=2^X$ ?
(b) $Y=\cos X$ ?

Solution: Let $y=g(x)$. Then
$$
p(y)=\sum_{x: y=g(x)} p(x) .
$$
Consider any set of $x$'s that map onto a single $y$. For this set
$$
\sum_{x: y=g(x)} p(x) \log p(x) \leq \sum_{x: y=g(x)} p(x) \log p(y)=p(y) \log p(y),
$$
since $\log$ is a monotone increasing function and $p(x) \leq \sum_{x: y=g(x)} p(x)=p(y)$. Extending this argument to the entire range of $X$ (and $Y$), we obtain
$$
\begin{aligned}
H(X) & =-\sum_x p(x) \log p(x) \\
& =-\sum_y \sum_{x: y=g(x)} p(x) \log p(x) \\
& \geq-\sum_y p(y) \log p(y) \\
& =H(Y),
\end{aligned}
$$
with equality iff $g$ is one-to-one with probability one.

(a) $Y=2^X$ is one-to-one, and hence the entropy, which depends only on the probabilities (and not on the values of the random variable), does not change, i.e., $H(X)=H(Y)$.
(b) $Y=\cos X$ is not necessarily one-to-one. Hence all we can say is that $H(X) \geq H(Y)$, with equality if cosine is one-to-one on the range of $X$; a small numerical example follows below.
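
To make the inequality concrete, here is a small numerical check (the three-point distribution is an arbitrary illustrative choice, not part of the problem): $Y=\cos X$ merges $x=-\pi/2$ and $x=\pi/2$, and the entropy strictly drops.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# X uniform on {-pi/2, 0, pi/2} (illustrative distribution).
px = {-math.pi / 2: 1/3, 0.0: 1/3, math.pi / 2: 1/3}

# Push the distribution forward through g(x) = cos(x); rounding merges the
# two values of x that both map to cos(x) = 0.
py = defaultdict(float)
for x, p in px.items():
    py[round(math.cos(x), 12)] += p

print(entropy(px.values()))  # log2(3), about 1.585 bits
print(entropy(py.values()))  # about 0.918 bits, so H(Y) < H(X)
```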

Problem 3.

Zero conditional entropy. Show that if $H(Y \mid X)=0$, then $Y$ is a function of $X$, i.e., for all $x$ with $p(x)>0$, there is only one possible value of $y$ with $p(x, y)>0$.

Solution: Zero Conditional Entropy. Assume that there exist an $x$, say $x_0$, and two different values of $y$, say $y_1$ and $y_2$, such that $p\left(x_0, y_1\right)>0$ and $p\left(x_0, y_2\right)>0$. Then $p\left(x_0\right) \geq p\left(x_0, y_1\right)+p\left(x_0, y_2\right)>0$, and $p\left(y_1 \mid x_0\right)$ and $p\left(y_2 \mid x_0\right)$ are not equal to 0 or 1. Thus
$$
\begin{aligned}
H(Y \mid X) & =-\sum_x p(x) \sum_y p(y \mid x) \log p(y \mid x) \\
& \geq p\left(x_0\right)\left(-p\left(y_1 \mid x_0\right) \log p\left(y_1 \mid x_0\right)-p\left(y_2 \mid x_0\right) \log p\left(y_2 \mid x_0\right)\right) \\
& >0,
\end{aligned}
$$
since $-t \log t \geq 0$ for $0 \leq t \leq 1$, and is strictly positive for $t$ not equal to 0 or 1. Therefore the conditional entropy $H(Y \mid X)$ is zero if and only if $Y$ is a function of $X$.
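
The same statement is easy to check numerically. The sketch below (the joint tables are made-up examples) computes $H(Y \mid X)$ directly from a joint distribution and shows it is zero exactly when each $x$ determines $y$.

```python
import math

def conditional_entropy(pxy):
    """H(Y|X) in bits, given a dict {(x, y): p(x, y)}."""
    # Marginal p(x).
    px = {}
    for (x, _), p in pxy.items():
        px[x] = px.get(x, 0.0) + p
    h = 0.0
    for (x, _), p in pxy.items():
        if p > 0:
            h -= p * math.log2(p / px[x])
    return h

# Y is a deterministic function of X (here Y = X mod 2): H(Y|X) = 0.
deterministic = {(0, 0): 0.25, (1, 1): 0.25, (2, 0): 0.25, (3, 1): 0.25}

# For x = 0, both y = 0 and y = 1 have positive probability: H(Y|X) > 0.
noisy = {(0, 0): 0.25, (0, 1): 0.25, (1, 1): 0.5}

print(conditional_entropy(deterministic))  # 0.0
print(conditional_entropy(noisy))          # 0.5 bits
```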
