What I’ve been working on lately: LNKNLogs

My last couple of posts could be construed as negative, since I was complaining about things that were bothering me at the time. Even though I tried to offer solutions and promote discussion (both of which I think are very positive things), I don’t want this to become a place to complain. After all, I started this with the idea of it being a research blog, so today that’s where I want to focus.

As of late, I’ve been working on a number of projects involving the galaxy power spectrum, which is key to our understanding of the evolution of the Universe. Since there are a number of very large galaxy redshift surveys planned for the near future, people are working on methods to optimize our analysis of the data. One of the key things to keep in mind when proposing a new or improved method is making sure that you don’t introduce any bias or distortion in the shape of the power spectrum. Because of that, it is very useful to have mock galaxy catalogs for which you know the exact shape of the power spectrum.

Of course, if you take some publicly available mock catalogs, particularly ones made with a method like second order Lagrangian perturbation theory (2LPT) that starts from some primordial power spectrum, you cannot be sure of the exact shape of the resulting power spectrum. Because of this, for the last paper I worked on, I wrote some code that created mock galaxy catalogs using the lognormal method. These mocks were good enough for that particular project, though they were limited in many ways.
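In case the lognormal method is unfamiliar, the basic recipe is short: generate a Gaussian density field with an appropriate power spectrum, exponentiate it, and Poisson sample galaxies from the result. Here is a minimal sketch of the idea in Python/NumPy (this is not my actual code; the grid size, box size, and number density are placeholders, and the conversion of the target power spectrum into the power spectrum of the log field is only noted in the docstring):

import numpy as np

def lognormal_mock(pk_gauss, nbar, L=1000.0, N=128, seed=None):
    """Toy lognormal mock: Gaussian field -> exponentiate -> Poisson sample.

    pk_gauss : callable P(k) for the Gaussian (log) field; strictly this should
               be derived from the target P(k) via xi_G(r) = ln[1 + xi(r)]
    nbar     : desired mean galaxy number density
    L, N     : box side length and grid cells per side (placeholder values)
    """
    rng = np.random.default_rng(seed)
    cell = L / N

    # |k| on the Fourier grid (rfftn layout)
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=cell)
    k1dz = 2.0 * np.pi * np.fft.rfftfreq(N, d=cell)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1dz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    # colour white noise with sqrt(P(k)) to get a Gaussian density field
    wk = np.fft.rfftn(rng.normal(size=(N, N, N)))
    pk = np.where(kmag > 0.0, pk_gauss(np.where(kmag > 0.0, kmag, 1.0)), 0.0)
    delta_g = np.fft.irfftn(wk * np.sqrt(pk / cell**3), s=(N, N, N))

    # lognormal transform, normalised so the mean of (1 + delta) is one
    field = np.exp(delta_g - 0.5 * delta_g.var())

    # Poisson sample the number of galaxies in each cell
    return rng.poisson(nbar * cell**3 * field)

Galaxy positions then just get drawn within each cell, and since every step is under your control, you know exactly what power spectrum went in.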

The mocks were created with a uniform number density throughout a cubic volume equivalent to the volume of a segment of a spherical shell that would have been observed in a particular survey. They contained no redshift evolution of the galaxy bias, linear growth factor, or power spectrum normalization, and the anisotropy induced by redshift space distortions was limited to a single direction as we were using the Kaiser approximation, also called the far field approximation.
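For reference, in that single line-of-sight limit the redshift-space power spectrum is just the real-space one boosted by the standard Kaiser factor,

P_{s}(k,\mu) = \left(b + f\mu^{2}\right)^{2}P(k),

where \mu is the cosine of the angle between \mathbf{k} and the one fixed line-of-sight direction, b is the galaxy bias, and f is the linear growth rate.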

Future projects that I will be working on will benefit from mock catalogs that are much more survey-like. That can be achieved by adding in the features listed as missing above, as well as by having the anisotropies point along a radial line of sight, as they would in a large survey. To get the radial anisotropies, the easiest thing to do is to create an isotropic galaxy distribution along with peculiar velocities (that is, velocities arising from gravitational interactions). So, after much reading in the literature, I found some extremely vague descriptions of how to do this (seriously, many of the papers only gave a proportionality and made no mention of what the constants of proportionality might be).
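Just so the goal is concrete: once the velocities exist, each galaxy gets shifted along its own line of sight to the observer rather than along a single axis. A minimal sketch of that mapping, assuming the observer sits at the origin of the coordinate system and that the units are chosen so that v/(aH) is a comoving length:

import numpy as np

def apply_radial_rsd(pos, vel, aH):
    """Shift galaxies along their own lines of sight (observer at the origin).

    pos, vel : (Ngal, 3) arrays of comoving positions and peculiar velocities
    aH       : a(z) * H(z), so that vel / aH is a comoving displacement
    """
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    rhat = pos / r                                     # unit line-of-sight vectors
    v_los = np.sum(vel * rhat, axis=1, keepdims=True)  # line-of-sight velocity
    return pos + (v_los / aH) * rhat                   # s = r + (v . rhat)/(aH) rhat

The hard part, of course, is generating the velocities themselves.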

In principle, the idea is actually fairly simple. When creating a lognormal mock catalogue, at one point you have a Gaussian density field in frequency space (i.e. the Fourier transform of some Gaussian density field in real space). One of the most popular methods of calculating gravitational forces in N-body simulations is to discretize your particles on a grid and perform a fast Fourier transform to solve Poisson’s equation (see this wikipedia page for the details). Also, if you know the gravitational acceleration at a point, you should be able to relate it to the velocity at that point in linear theory,

\mathbf{v} = \dfrac{H_{0}f}{4\pi G\bar{\rho}}\mathbf{g}.
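Combining that relation with the Fourier-space solution of Poisson’s equation, the gravitational constant and the mean density cancel, and the whole operation reduces to multiplying the Fourier-space density by i H_{0} f \mathbf{k}/k^{2}. A bare-bones sketch of that step (just the idea, not my actual code; it assumes the wavevector component grids already exist and match the density grid, and that the grid has an even number of cells per side):

import numpy as np

def linear_velocity_field(delta_k, kx, ky, kz, f, H0):
    """Linear-theory velocities: v(k) = i * H0 * f * delta(k) * k / k^2.

    delta_k    : Fourier-space (rfftn layout) Gaussian density field
    kx, ky, kz : wavevector components on the same grid
    f, H0      : linear growth rate and Hubble constant (these set the units)
    """
    k2 = kx**2 + ky**2 + kz**2
    k2 = np.where(k2 > 0.0, k2, 1.0)   # avoid dividing by zero at k = 0
    factor = 1j * H0 * f * delta_k / k2
    vx = np.fft.irfftn(factor * kx)    # back to real space, one component at a time
    vy = np.fft.irfftn(factor * ky)
    vz = np.fft.irfftn(factor * kz)
    return vx, vy, vz

The velocity at each galaxy’s position can then be interpolated off these grids before applying the redshift-space shift.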

So, it should be fairly simple to generate velocities, and in fact it is, but there is a problem. The velocities that are generated look reasonable: they follow a Gaussian distribution (as you might expect, since they are proportional to a Gaussian density field), they point in a physically sensible direction (i.e. towards density peaks), and they even give you a power spectrum quadrupole of more-or-less the right amplitude and shape. As you might guess, the problem is in the more-or-less part of that statement.
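For reference, the linear-theory expectation for the quadrupole is the usual Kaiser result (writing \beta = f/b),

P_{2}(k) = \left(\dfrac{4\beta}{3} + \dfrac{4\beta^{2}}{7}\right)b^{2}P(k),

so in linear theory the quadrupole should simply track the shape of P(k).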

For some reason, the quadrupole has a small positive bias that increases as k increases. So, for the better part of a month now, I’ve been trying a large number of things to see what may be causing this bias, and so far I’ve had no luck in making it go away or tracking down its cause.

I thought maybe it was aliasing, but aliasing typically causes a decrease in amplitude near the Nyquist frequency when using cloud-in-cell interpolation for the grid assignments (see this paper), and since I’m not trying to measure anything near the Nyquist frequency and I’m seeing a positive bias, that does not seem to be the cause. I thought maybe the specific input power spectrum was the problem, but the results here have been less than illuminating. I input some very artificial power spectra, a Gaussian and a pure power law, and then generated some mocks. The Gaussian showed an increasing positive bias with increasing k, but the power law showed only a very minor increasing bias (at most ~3%).
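For context, the standard way to correct for the cloud-in-cell assignment is to deconvolve the CIC window in Fourier space; a sketch of that correction (it only removes the smoothing from the mass assignment, and does nothing about aliased power itself):

import numpy as np

def deconvolve_cic(delta_k, kx, ky, kz, cell):
    """Divide out the CIC assignment window, W(k) = prod_i sinc^2(k_i * cell / 2)."""
    def sinc(x):
        return np.sinc(x / np.pi)      # numpy's sinc(x) is sin(pi*x)/(pi*x)
    w = (sinc(kx * cell / 2.0) * sinc(ky * cell / 2.0) * sinc(kz * cell / 2.0)) ** 2
    return delta_k / w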

There were other things that I looked at and considered, of course, but none of them has given me a catalog that reproduces the power spectrum I expect. It could be that linear theory is breaking down. I’m performing another test as I write this. Hopefully I will soon have this working so I can begin making modifications to incorporate further survey-like features.

If you have any experience with this, and happen to stumble upon this blog post, please feel free to share any thoughts/advice you may have in the comments.
