# Taylor expansions for random-walk polymers

Original entry by Joerg Fritz, AP225 Fall 2009

## Keywords

Random walk, Taylor expansion, Symmetry, Dimensionality

## Outline

The end-to-end distance of a polymer undergoing a random walk can be related to the number of segments by a simple scaling. A very elegant derivation of this scaling, based on Witten's book "Structured fluids", has been presented in class and can be found in detail here.

In this entry we investigate a small aspect of the arguments presented to produce this scaling. We specifically ask the following question: Can we determine how the probability distribution <math>p</math> for the length of the end-to-end vector changes with the number of segments <math>n</math> in the polymer, if the number of segments is large? Or, more briefly: what is <math>dp(n,r)/dn</math> in the limit of large <math>n</math>?

The argument in Witten is based on the following assumptions:

- A chain with <math>n+1</math> segments can be thought of as (because it actually is) created by adding a single segment to a polymer with <math>n</math> segments. Its probability distribution is therefore determined by summing over all the ways these two events can combine to achieve the desired outcome: the last segment is some vector <math>\vec{s}</math> and the original <math>n</math>-segment polymer has an end-to-end vector of <math>\vec{r} - \vec{s}</math>. (Note on notation: in Witten <math>\vec{s}</math> is called <math>\vec{r}_1</math>, but since we will use index notation later, this might be confusing.)
- To be a truly random walk, the end-to-end probability for a single segment <math>p(1,\vec{s})</math> can only depend on the magnitude of <math>\vec{s}</math>; in particular, it must be completely independent of the orientation of the last segment, thus <math>p(1,\vec{s})=p_0(s)</math>
- The probability distribution can be normalized so that <math> \int p(n,r) d^3r = 1</math>
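These assumptions can be illustrated with a minimal numerical sketch (not part of Witten's argument; a 1D lattice walk with steps of <math>\pm 1</math>, each with probability <math>1/2</math>, is assumed for simplicity). Building the <math>n+1</math>-segment distribution from the <math>n</math>-segment one is a convolution with <math>p_0</math>, and both the normalization of assumption 3 and the random-walk variance <math>\langle r^2 \rangle = n</math> come out automatically:

```python
import numpy as np

x = np.arange(-20, 21)          # lattice positions
p = (x == 0).astype(float)      # 0-segment chain: delta at the origin

def add_segment(p):
    # p(n+1, x) = 1/2 p(n, x-1) + 1/2 p(n, x+1): convolution with p0
    q = np.zeros_like(p)
    q[1:] += 0.5 * p[:-1]
    q[:-1] += 0.5 * p[1:]
    return q

n = 10
for _ in range(n):
    p = add_segment(p)

print(p.sum())            # normalization is preserved: 1.0
print((p * x**2).sum())   # <r^2> = n = 10, the random-walk scaling
```

The grid is chosen large enough that no probability reaches the boundary within ten steps.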

In order to arrive at the desired solution Witten uses a Taylor expansion of the probability <math>p(n,\vec{r}-\vec{s})</math> around <math>p(n,r)</math> of the form

<math>p(n,\vec{r}-\vec{s}) = p(n,\vec{r}) - \vec{s} \cdot \nabla p(n,r) + \frac{1}{6} s^2 \nabla^2 p(n,r) + \dots</math>

While the first two terms of the Taylor expansion seem right on an intuitive basis, the third is more problematic, with a coefficient <math>\frac{1}{6}</math> where one would expect the familiar <math>\frac{1}{2}</math>, and a slightly confusing notation. We will investigate more specifically how to arrive at this Taylor expansion and what exactly it means. Our conclusion will be that this form of the equation is actually wrong in the framework presented by Witten and can only be explained by using additional assumptions. We will also see that the components of the Taylor expansion ignored by Witten cancel out in the next step, so the final result of his calculation remains valid.

## Taylor expansion in several variables

The Taylor expansion for a scalar function <math>y</math> in several variables (like our probability distribution) around <math>\mathbf{x}</math> can in general be written as

<math>y=f(\mathbf{x}+\Delta\mathbf{x})\approx f(\mathbf{x}) + \nabla f(\mathbf{x}) \cdot \Delta \mathbf{x} +\frac{1}{2} \Delta\mathbf{x}^\mathrm{T} H(\mathbf{x}) \Delta\mathbf{x}</math>

where *H* is the Hessian defined as

<math>H = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \\ \frac{\partial^2 f}{\partial x_2\,\partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2\,\partial x_n} \\ \\ \vdots & \vdots & \ddots & \vdots \\ \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \frac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}</math>
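As a quick sanity check, this second-order formula can be verified numerically. The sketch below is illustrative only: it uses an arbitrary smooth test function <math>f(\mathbf{x}) = e^{-|\mathbf{x}|^2}</math> (not our probability distribution) and confirms that the expansion matches the exact value up to third-order corrections:

```python
import numpy as np

def f(x):
    # test function f(x) = exp(-|x|^2)
    return np.exp(-np.dot(x, x))

def grad_f(x):
    # gradient: df/dx_i = -2 x_i f(x)
    return -2.0 * x * f(x)

def hessian_f(x):
    # Hessian: H_ij = (4 x_i x_j - 2 delta_ij) f(x)
    return (4.0 * np.outer(x, x) - 2.0 * np.eye(len(x))) * f(x)

x = np.array([0.3, -0.2, 0.5])
dx = np.array([0.01, 0.02, -0.01])

first_order = f(x) + grad_f(x) @ dx
taylor = first_order + 0.5 * dx @ hessian_f(x) @ dx
exact = f(x + dx)
print(abs(exact - taylor))  # only third-order terms remain
```

The second-order error is far smaller than the error of stopping after the gradient term, as the formula predicts.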

If we apply these rules to the expansion done by Witten (let's use index notation to make our life easier) we get

<math>p(n,\vec{r}-\vec{s}) = p(n,r) - \frac{\partial p(n,r)}{\partial r_i} s_i + \frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i \partial r_j} s_i s_j + \dots</math>

where every repeated index (<math>i</math> and <math>j</math>) is summed over the dimensions of the space considered (<math>3</math> in our case). Note that we have exchanged <math>p(n,\vec{r})</math> for <math>p(n,r)</math>: if the walk is truly random, no spatial direction can be special, so our problem has to be isotropic. If we want to get a Laplacian <math>\nabla^2</math> into this expansion we have to split the second-order term into derivatives in a single variable and mixed derivatives, like this

<math>p(n,\vec{r}-\vec{s}) = p(n,r) - \frac{\partial p(n,r)}{\partial r_i} s_i + \underbrace{\frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i^2} s_i^2}_A + \underbrace{\frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i \partial r_j} s_i s_j (1 - \delta_{ij})}_B + \dots</math>

where the last factor in *B* makes sure that the derivatives in one variable are not counted twice. This is identical to the formula presented by Witten if and only if

<math>A = \frac{1}{6} s^2 \nabla^2 p(n,r)</math>

and

<math>B= 0</math>

By writing out the terms it is easy to see that the first condition is only true for symmetric vectors (where every component has the same magnitude <math>k</math>), since then <math>k^2 + k^2 + k^2 = s^2</math> and so <math>k^2 = s^2/3</math>. But in general

<math>\frac{1}{2} \left( \frac{\partial^2 p}{\partial r_1^2} s_1^2 + \frac{\partial^2 p}{\partial r_2^2} s_2^2 + \frac{\partial^2 p}{\partial r_3^2} s_3^2 \right) \neq \frac{1}{2} \left( \frac{\partial^2 p}{\partial r_1^2} \frac{s^2}{3} + \frac{\partial^2 p}{\partial r_2^2} \frac{s^2}{3} + \frac{\partial^2 p}{\partial r_3^2} \frac{s^2}{3} \right) </math>

and thus

<math>A \neq \frac{1}{6} s^2 \nabla^2 p(n,r)</math>

Using the same argument, i.e. writing down the terms explicitly, we can see that there is also no reason why *B* should be zero. This would only be true for vectors with a single nonzero component, or for very special distributions <math>p(n,r)</math>. Since we want to determine the form of <math>p(n,r)</math> and <math>\vec{s}</math> can be any vector we choose, in general

<math>B \neq 0</math>
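Both claims are easy to confirm numerically. The sketch below is illustrative, not part of Witten's argument: it assumes a Gaussian test distribution <math>p(\vec{r}) = e^{-|\vec{r}|^2}</math> and a generic (non-symmetric) step vector, and evaluates <math>A - \frac{1}{6} s^2 \nabla^2 p</math> and <math>B</math> directly:

```python
import numpy as np

def p(r):
    # test distribution p(r) = exp(-|r|^2); any smooth p would do
    return np.exp(-np.dot(r, r))

def hessian_p(r):
    # H_ij = (4 r_i r_j - 2 delta_ij) p(r)
    return (4.0 * np.outer(r, r) - 2.0 * np.eye(3)) * p(r)

r = np.array([0.4, -0.1, 0.7])
s = np.array([0.3, 0.0, -0.2])   # generic step, not a "symmetric" vector

H = hessian_p(r)
A = 0.5 * np.sum(np.diag(H) * s**2)
witten_A = np.dot(s, s) / 6.0 * np.trace(H)   # (1/6) s^2 Laplacian(p)
B = 0.5 * sum(H[i, j] * s[i] * s[j]
              for i in range(3) for j in range(3) if i != j)

print(A - witten_A)  # nonzero: A differs from (1/6) s^2 Laplacian(p)
print(B)             # nonzero: the mixed term does not vanish
```

For a symmetric step such as <math>\vec{s} = (k,k,k)</math> the first difference would vanish, matching the discussion above.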

## Why it still works

So the Taylor expansion is "wrong", but the result is correct: it can be derived in many different ways using concepts developed for random walks. How can this be true? The answer lies in the next step performed in Witten's derivation. According to our assumption 1 we can determine the probability distribution for a polymer with end-to-end distance *r* and *n+1* segments as

<math>p(n+1,r) = \int p_0(s) p(n, \vec{r} - \vec{s}) d^3s</math>

We can subtract <math>p(n,r)</math> from both sides of the equation and associate <math>\frac{p(n+1,r)-p(n,r)}{1}</math> with <math>dp/dn</math> for large *n* to get

<math>\frac{dp(n,r)}{dn} = \int p_0(s) \left[p(n, \vec{r} - \vec{s}) - p(n,r) \right] d^3s </math>

Note that we have now assumed that *n* is a continuous variable and not an integer. If we plug our (correct) Taylor expansion into the integral we get

<math>\frac{dp(n,r)}{dn} = \int p_0(s) \left[ - \frac{\partial p(n,r)}{\partial r_i} s_i + \frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i^2} s_i^2 + \frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i \partial r_j} s_i s_j (1 - \delta_{ij}) + \dots \right] d^3s</math>

The derivatives do not depend on *s* so we can write

<math>\frac{dp(n,r)}{dn} = - \frac{\partial p(n,r)}{\partial r_i} \int p_0(s) s_i d^3s + \frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i^2} \int p_0(s) s_i^2 d^3s + \frac{1}{2} \frac{\partial^2 p(n,r)}{\partial r_i \partial r_j} \int p_0(s) s_i s_j (1 - \delta_{ij}) d^3s + \dots </math>

Since we have a random walk, it should be equally probable to step in every direction; because <math>p_0(s)</math> is isotropic, <math>\int p_0(s) s_i d^3s = 0 </math>. Another way to think about this: the integral actually calculates <math>\left \langle \vec{s} \right \rangle</math>, the average direction <math>\vec{s}</math> is pointing in. For a random walk this should of course be zero.

In the same way we can think about <math>\int p_0(s) s_i s_j (1 - \delta_{ij}) d^3s</math>, which effectively calculates <math>\langle s_i s_j \rangle</math> for <math>i \neq j</math>. This should be zero, since the step in direction *i* must be completely independent of the step in direction *j*; otherwise our walk is not random.

For the term with the Laplace operator we can use the same argument. <math>\int p_0(s) s_i^2 d^3s</math> calculates <math>\langle s_i^2 \rangle</math>. If this is a random walk, the average should be the same for all directions, i.e. <math>\langle s_1^2 \rangle=\langle s_2^2 \rangle = \langle s_3^2 \rangle </math>, and from geometric considerations we get <math>\langle s_i^2 \rangle =\frac{\langle s^2 \rangle}{3}</math>. With these observations we get

<math>\frac{dp(n,r)}{dn} = \frac{1}{6} \langle s^2 \rangle \nabla^2 p(n,r) + \dots </math>

and we are back on track for the derivation presented in Witten. Note also how <math>p_0(s)</math> weights every value of <math>s</math> by its probability; it is the normalization of assumption 3 that makes these integrals true averages.
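The three averages used above can be checked with a short Monte Carlo sketch (illustrative; unit-length steps with uniformly random orientation are assumed, so <math>s^2 = 1</math>):

```python
import numpy as np

rng = np.random.default_rng(0)
# Uniform directions on the unit sphere: normalize Gaussian vectors.
v = rng.normal(size=(200_000, 3))
s = v / np.linalg.norm(v, axis=1, keepdims=True)

print(s.mean(axis=0))              # <s_i>    : approx (0, 0, 0)
print((s[:, 0] * s[:, 1]).mean())  # <s_i s_j>: approx 0 for i != j
print((s**2).mean(axis=0))         # <s_i^2>  : approx (1/3, 1/3, 1/3)
```

All three estimates converge to the claimed values as the sample size grows.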

## A slightly more intuitive explanation

The most non-intuitive part of the treatment above is probably the claim that integrals of the form <math>\int p_0(s) s_i d^3s</math> somehow correspond to an averaging process. This is best understood if we consider a polymer of the "Flory type". This polymer is restricted to 2D motion and its segments can only be formed along a square grid. This also means that every step must have the same length *a*, where *a* is the grid size. A step in each of the four directions has a probability of <math>1/4</math>, and the equivalent of the above integral would be

<math>\int p_0(s) s_i^2 d^2s \to \sum_{k=1}^4 \frac{1}{4} {(s_i)}_k^2</math> where <math>{(s_i)}_k</math> is the i-th component of the k-th possible step vector, each of length *a*. Using this framework the averaging process is easier to understand.

Note: In 2D the mysterious <math>\frac{1}{6}</math> is actually a <math>\frac{1}{4}</math>; in general the coefficient is <math>\frac{1}{2d}</math> in <math>d</math> dimensions.
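In this lattice picture the averages are literally finite sums over the four possible steps. A minimal sketch (grid size <math>a = 1</math> assumed):

```python
a = 1.0
# the four possible square-lattice steps, each with probability 1/4
steps = [(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)]

mean_sx = sum(sx for sx, sy in steps) / 4    # <s_x>
mean_sx2 = sum(sx**2 for sx, sy in steps) / 4  # <s_x^2>

print(mean_sx)   # 0.0: no preferred direction
print(mean_sx2)  # 0.5 = a^2/2, so (1/2)<s_x^2> = a^2/4: the 2D coefficient
```

The factor <math>\frac{1}{4}</math> appears because only two of the four steps move in the x direction, giving <math>\langle s_x^2 \rangle = a^2/2</math>.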

## Other ways to explain these laws

The presentation in Witten is very elegant, considering how far-reaching its conclusions are given a very limited number of assumptions. If one is only interested in the general scaling <math>\langle r^2 \rangle^{1/2} \propto n^{1/2}</math>, i.e. the length of the end-to-end vector scales like the square root of the number of segments, a much easier argument is possible. And this might be all one needs: many of the most fascinating conclusions can be reached using only this scaling.

Our polymer with end-to-end vector <math>\vec{r}</math> is clearly made up of a number *n* of small segments, each with vector <math>\vec{s}_k</math>. So

<math>\vec{r} = \sum_{k=1}^n \vec{s}_k</math>

What we are interested in is essentially

<math>\langle {\vec{r}}^{\ 2}\rangle = \left \langle \left( \sum_{k=1}^n \vec{s}_k \right) \cdot \left( \sum_{l=1}^n \vec{s}_l \right) \right \rangle</math>

and since the product of sums can be expanded term by term

<math>\langle {\vec{r}}^{\ 2}\rangle = \left \langle \sum_{k=1}^n \sum_{l=1}^n (\vec{s}_k \cdot \vec{s}_l) \right \rangle </math>

the summations also commute with the averaging

<math>\langle {\vec{r}}^{\ 2}\rangle = \sum_{k=1}^n \sum_{l=1}^n \left \langle \vec{s}_k \cdot \vec{s}_l \right \rangle </math>

and since there can be no correlation between different steps in a truly random walk, all mixed terms (<math>k \neq l</math>) must be zero

<math>\langle {\vec{r}}^{\ 2}\rangle = \sum_{k=1}^n \left \langle \vec{s}_k^{\ 2} \right \rangle </math>

and, since all segments are statistically identical, we get

<math>\langle {\vec{r}}^{\ 2}\rangle = n \langle \vec{s}_k^{\ 2} \rangle </math>

or in terms of our original objective

<math>\langle r^2 \rangle^{1/2} = \sqrt{n} \langle \vec{s}_k^{\ 2} \rangle^{1/2} </math>

or in words: the length of the end-to-end vector scales like the square root of the number of segments and is proportional to the root-mean-square length of the individual segments.
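This scaling is easy to confirm with a Monte Carlo sketch (illustrative; a freely jointed chain of unit-length segments with uniformly random orientations is assumed, so <math>\langle \vec{s}_k^{\ 2} \rangle = 1</math>):

```python
import numpy as np

rng = np.random.default_rng(1)
n, walks = 100, 20_000

# unit segments with uniformly random 3D orientations
v = rng.normal(size=(walks, n, 3))
s = v / np.linalg.norm(v, axis=2, keepdims=True)
r = s.sum(axis=1)                 # end-to-end vectors, one per walk

mean_r2 = (r**2).sum(axis=1).mean()
print(mean_r2 / n)  # approx 1: <r^2> = n <s^2> with <s^2> = 1
```

Doubling <math>n</math> doubles <math>\langle r^2 \rangle</math>, so the end-to-end length indeed grows as <math>\sqrt{n}</math>.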

A modification of the calculation in Witten is also possible. Since we assumed that <math>n</math> is a continuous variable anyway, we could immediately Taylor expand <math>p(n+1,\vec{r}-\vec{s})</math> around <math>p(n,r)</math>, treating *n* as just another independent variable in the expansion. This gives the same result as above in fewer steps.