I plan to investigate electron density distributions (via dispersion measures) along different sightlines in IllustrisTNG, to see how different types of large-scale structure contribute to them. I have a question about whether to use TNG100 or TNG300, considering the trade-off between box size and resolution of the two runs.
I will stack snapshots (at the correct comoving distance) to go to larger redshifts than a single snapshot allows. I know that it is not possible to investigate fluctuations larger than the size of an individual snapshot.
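To make the stacking idea concrete, here is a minimal sketch of accumulating DM by laying boxes end to end in comoving distance, integrating DM = ∫ n_e dl / (1+z). All numbers (box size, electron densities, the distance-to-redshift mapping) are illustrative placeholders, not actual TNG parameters, and the mock density field stands in for real snapshot data:

```python
import numpy as np

BOX_SIZE = 100.0   # comoving box size [Mpc]; placeholder, not a TNG value
N_CELLS = 64       # cells sampled along one traversal of a box

rng = np.random.default_rng(0)

def sightline_ne(n_cells):
    """Mock comoving electron density along one box traversal [cm^-3]."""
    return (2e-7 * (1.0 + 0.5 * rng.standard_normal(n_cells))).clip(min=0)

def stack_dm(n_boxes):
    """Accumulate observed-frame DM [pc cm^-3] over n_boxes stacked boxes."""
    dl = BOX_SIZE / N_CELLS * 3.086e24   # comoving cell length [cm]
    dm_cm = 0.0
    d_comoving = 0.0
    for _ in range(n_boxes):
        z = d_comoving / 4000.0          # crude distance->redshift placeholder
        ne = sightline_ne(N_CELLS)
        dm_cm += np.sum(ne * dl) / (1.0 + z)   # (1+z) factor for observed DM
        d_comoving += BOX_SIZE           # next box starts where this one ends
    return dm_cm / 3.086e18              # convert cm^-2 -> pc cm^-3

print(stack_dm(10))
```

In a real pipeline the mock `sightline_ne` would be replaced by electron densities read from the snapshot at the appropriate comoving distance, with each stacked box randomly rotated/translated to avoid repeating the same structures.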
In the current literature, some studies (e.g. Takahashi+20) claim that electron density fluctuations dominate at scales of about 1 Mpc, while others (e.g. Jaroszynski+19) claim that matter distributions correlate on scales of tens of Mpc.
Takahashi+20 and Pol+19 show that these large-scale structures contribute mainly to the tail of the DM distribution. However, Takahashi+20 argue that fluctuations on scales larger than TNG300 are not very important (they show, e.g. in their Fig. 3, that the power spectrum drops off by ~10 Mpc). Dolag+15 carry out similar investigations with the Magneticum Pathfinder simulation and conclude that the difference in results between the two box sizes they investigate (400 vs 900 Mpc/h) is not significant.
My question is this: one could conclude from the above that, since the dominant scale of fluctuations (1-10 Mpc) is resolved within TNG100, structures on scales > 100 Mpc are not very important, and so it doesn't matter whether you use TNG300 or TNG100 when investigating larger scales. Combined with the fact that TNG100 resolves smaller scales better than TNG300, one might conclude that TNG100 is the better choice. Does this agree with any other literature on the subject that you are aware of?
A potential complication is that TNG100 is much smaller than TNG300, so an individual box will contain fewer large-scale structures (e.g. large voids/filaments). Do these structures show significant variation in, e.g., density, such that as large a sample as possible is required for statistical analysis, meaning we should use TNG300? Or are they all similar enough that TNG100 will contain enough of them?
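The sample-variance worry above can be illustrated with a toy calculation: if large structures have intrinsic scatter in some property, a box containing only a few of them gives a noisy estimate of the population statistics, and the scatter of that estimate shrinks roughly as 1/sqrt(N). The mean, scatter, and structure counts below are made-up placeholders, not measured TNG values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population statistics for some structure property (e.g. void
# density contrast); purely illustrative numbers.
TRUE_MEAN, TRUE_STD = 1.0, 0.3

def box_estimate(n_structures, n_trials=2000):
    """Scatter of the box-averaged property across many mock realizations,
    when each box contains n_structures independent structures."""
    samples = TRUE_MEAN + TRUE_STD * rng.standard_normal((n_trials, n_structures))
    return samples.mean(axis=1).std()

# A TNG300-like box has ~27x the volume of a TNG100-like box (factor 3 per
# side), hence roughly ~27x more independent large structures.
print(box_estimate(5))     # "small box": few structures -> noisy box average
print(box_estimate(135))   # "large box": scatter shrinks ~ 1/sqrt(N)
```

Whether TNG100 is "enough" then comes down to whether the intrinsic structure-to-structure scatter, divided by sqrt of the number of such structures in a 100 Mpc box, is small compared to the effect you are trying to measure.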
Thanks again for all your advice so far,
As you say, if TNG100 is large enough, then one would tend to prefer this run, not only because it has higher numerical resolution, but also because it has slightly more realistic galaxy properties, which could indirectly affect things like DM statistics. (Galaxy properties, i.e. the overall outcome of the simulation and its level of quantitative agreement with observational data, differ between TNG300 and TNG100 because of TNG300's lower resolution.)
That said, the only/best way to know for sure is just to check: TNG100-2 has the same resolution as TNG300-1, but the smaller (TNG100) volume and ICs/large-scale structure statistics identical to TNG100-1. So, if you compare your analysis outcome on TNG100-1 versus TNG100-2, and on TNG100-2 versus TNG300-1, you can isolate the separate impacts of resolution and volume.
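The comparison logic can be sketched as a small harness. Here `dm_distribution` is a hypothetical stand-in for whatever pipeline produces a per-sightline DM sample for a given run; the numbers it returns are invented purely for illustration:

```python
import numpy as np

def dm_distribution(run_name, rng):
    """Placeholder: pretend each run yields a sample of sightline DMs
    [pc cm^-3]. Offsets are made up, not real differences between runs."""
    offsets = {"TNG100-1": 0.0, "TNG100-2": 5.0, "TNG300-1": 7.0}
    return 1000.0 + offsets[run_name] + 50.0 * rng.standard_normal(500)

rng = np.random.default_rng(2)
dm = {run: dm_distribution(run, rng)
      for run in ("TNG100-1", "TNG100-2", "TNG300-1")}

# Resolution effect: same volume and ICs, different resolution.
res_effect = dm["TNG100-2"].mean() - dm["TNG100-1"].mean()
# Volume effect: matched resolution, different volume.
vol_effect = dm["TNG300-1"].mean() - dm["TNG100-2"].mean()
print(res_effect, vol_effect)
```

The same split works for any summary statistic of the DM distribution (variance, tail fraction, etc.), not just the mean.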
Thanks again for some sound advice. Your comparison idea is a good one, and it had slipped my notice that TNG100-2 and TNG300-1 have matching resolutions. I'll look into it further.