# Information Theory (ELEC7604)


## First Step: The Locational SMI of a Particle in a 1D Box of Length $L$

Figure 2.1a shows a particle confined to a one-dimensional (1D) “box” of length $L$. The corresponding continuous SMI is:
$$H[f(x)]=-\int f(x) \log f(x) d x$$
Note that in Eq. (2.1) the SMI (denoted $H$) is viewed as a functional of the function $f(x)$, where $f(x) d x$ is the probability of finding the particle in the interval between $x$ and $x+d x$.

Next, we calculate the specific density distribution which maximizes the locational SMI in (2.1). It is easy to show (see reference [1]) that the result is:
$$f_{e q}(x)=\frac{1}{L}$$
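For readers who want the "easy to show" step spelled out, here is a standard Lagrange-multiplier sketch (not given explicitly in the text) of why the uniform density maximizes (2.1) under the normalization constraint:

```latex
% Maximize H[f] = -\int_0^L f \log f \, dx subject to \int_0^L f \, dx = 1:
\mathcal{L}[f] = -\int_0^L f(x)\log f(x)\,dx
  + \lambda\left(\int_0^L f(x)\,dx - 1\right)
% Setting the functional derivative to zero,
\frac{\delta \mathcal{L}}{\delta f} = -\log f(x) - 1 + \lambda = 0
\quad\Longrightarrow\quad f(x) = e^{\lambda - 1}\ \text{(a constant)},
% and normalization over [0, L] then forces
f_{eq}(x) = \frac{1}{L}.
```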
Since we know that the probability density of finding the particle anywhere in the box is $1 / L$, we may identify the distribution which maximizes the SMI as the equilibrium (eq.) distribution. The justification for this is explained in detail in Ben-Naim [1, 4]. Substituting (2.2) into (2.1), we obtain the maximum value of the SMI over all possible locational distributions:
$$H(\text { locations in } 1 D)=\log L$$
Next, we admit that the location of the particle cannot be determined with absolute accuracy; there exists a small interval $h_x$ within which we do not care where the particle is. Therefore, we must correct Eq. (2.3) by subtracting $\log h_x$; instead of (2.3), we write the modified $H$ (locations in 1D) as:
$$H(\text { locations in } 1 D)=\log L-\log h_x$$
In the last equation we have effectively defined $H$ (locations in 1D) for the finite number of intervals $n=L / h_x$. The passage from the infinite to the finite case is shown in Fig. 2.1b. Note that when $h_x \rightarrow 0$, $H$ (locations in 1D) diverges to infinity; here we do not take the strict mathematical limit, but stop at an $h_x$ which is small but not zero. Note also that the ratio $L / h_x$ is a pure number, so we need not specify the units of either $L$ or $h_x$.
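The finite-interval picture makes Eq. (2.4) directly checkable: the discrete SMI of a uniform distribution over $n = L/h_x$ intervals is $\log n = \log L - \log h_x$. A minimal sketch (the numeric values of `L` and `h_x` are assumptions for illustration; the text fixes neither):

```python
import math

# Discretize the 1D box of length L into n = L / h_x intervals; the
# locational SMI of the uniform distribution is then
#   log n = log L - log h_x   (Eq. 2.4).
L = 10.0     # box length (illustrative; only the ratio L / h_x matters)
h_x = 0.01   # resolution interval (illustrative)
n = round(L / h_x)                            # number of intervals
p = 1.0 / n                                   # uniform probability per interval
H = -sum(p * math.log(p) for _ in range(n))   # discrete SMI
print(H, math.log(L) - math.log(h_x))         # the two values agree
```

Because the two lengths enter only through their ratio, changing the units of `L` and `h_x` together leaves `H` unchanged, as the text notes.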

## Second Step: The Velocity SMI of a Particle in a 1D “Box” of Length $L$

In the second step we calculate the probability distribution that maximizes the (continuous) SMI, subject to two conditions:
$$\begin{gathered} \int_{-\infty}^{\infty} f(x) d x=1 \\ \int_{-\infty}^{\infty} x^2 f(x) d x=\sigma^2=\text { constant } \end{gathered}$$
In his original paper, Shannon [5] proved that the function $f(x)$ which maximizes the SMI in (2.1), subject to the two conditions (2.5) and (2.6), is the Normal distribution, i.e.:
$$f_{e q}(x)=\frac{\exp \left[-x^2 / 2 \sigma^2\right]}{\sqrt{2 \pi \sigma^2}}$$
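The same variational argument as in the locational case extends to Shannon's result; a sketch of the maximization under the two constraints (2.5) and (2.6):

```latex
% Maximize H[f] subject to \int f\,dx = 1 and \int x^2 f\,dx = \sigma^2:
\mathcal{L}[f] = -\int f\log f\,dx
  + \lambda_1\left(\int f(x)\,dx - 1\right)
  + \lambda_2\left(\int x^2 f(x)\,dx - \sigma^2\right)
\frac{\delta\mathcal{L}}{\delta f} = -\log f(x) - 1 + \lambda_1 + \lambda_2 x^2 = 0
\quad\Longrightarrow\quad f(x) \propto e^{\lambda_2 x^2}.
% Normalizability requires \lambda_2 < 0; imposing (2.5) and (2.6) fixes
% \lambda_2 = -1/(2\sigma^2), i.e. the Normal distribution of Eq. (2.7).
```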
Note again that we use the subscript eq. for equilibrium. Applying this result to a classical particle having average kinetic energy $\frac{m\langle v_x^2\rangle}{2}$, and using the relationship between the variance $\sigma^2$ and the temperature of the system:

$$\sigma^2=\frac{k_B T}{m}$$
we obtain the equilibrium velocity distribution of one particle in a $1 \mathrm{D}$ system. This is shown in Fig. 2.2:
$$f_{e q}\left(v_x\right)=\sqrt{\frac{m}{2 \pi k_B T}} \exp \left[\frac{-m v_x^2}{2 k_B T}\right]$$
Here, $k_B$ is the Boltzmann constant, $m$ is the mass of the particle, and $T$ the absolute temperature.
The value of the (continuous) SMI for this probability density is:
$$H_{\max }(\text { velocity in } 1 D)=\frac{1}{2} \log \left(2 \pi e k_B T / m\right)$$
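Eq. (2.10) can be verified numerically: integrating $-\int f \log f \, dv_x$ for the density (2.9) should reproduce $\frac{1}{2} \log(2\pi e k_B T / m)$. A sketch with illustrative values (the temperature and mass below are assumptions, not from the text):

```python
import math

# Numerical check: the 1D equilibrium velocity density of Eq. (2.9)
# attains the maximal SMI value (1/2) log(2*pi*e*k_B*T/m) of Eq. (2.10).
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed for this example)
m = 4.65e-26         # particle mass, kg (roughly one N2 molecule; assumed)
sigma2 = k_B * T / m           # variance of the velocity, Eq. (2.8)
s = math.sqrt(sigma2)          # standard deviation

def f(v):
    """Equilibrium velocity density, Eq. (2.9)."""
    return math.exp(-v * v / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

# Riemann sum of -∫ f log f dv over ±8 standard deviations
dv = s / 1000.0
H = -sum(f(i * dv) * math.log(f(i * dv)) for i in range(-8000, 8000)) * dv
print(H, 0.5 * math.log(2.0 * math.pi * math.e * sigma2))  # nearly identical
```

As with the locational SMI, the continuous value depends on the units chosen for velocity; the divergence is again removed by subtracting the log of a small resolution interval.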
