I just wanted some advice on the best way to get data into a pandas dataframe. In particular, I would like to have dataframes at multiple snapshots of the same subhalos where each individual subhalo is a row. I'm currently using API methods but whatever method is the simplest would be excellent.
If you want "columns" to be time, i.e. different snapshots, then you need to use the merger tree.
If you load data for a particular subhalo with il.sublink.loadTree(), this returns a dictionary with many fields. Each should have the same size along the first dimension, with one entry per point along the tree (for the main progenitor branch, one entry per snapshot; the SnapNum field records which). It should be fairly easy to have pandas convert this dictionary into a dataframe.
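As a minimal sketch: if you restrict to 1D fields (multi-dimensional fields like SubhaloMassType need to be split into columns first), the dictionary can be passed straight to the pandas DataFrame constructor. The arrays below are fabricated stand-ins for what loadTree() would return; the commented-out call shows roughly what the real invocation would look like.

```python
import numpy as np
import pandas as pd

# Stand-in for the dictionary returned by il.sublink.loadTree(); in
# practice you would call something like:
#   tree = il.sublink.loadTree(basePath, snapNum, subhalo_id,
#                              fields=['SnapNum', 'SubfindID', 'Mass'],
#                              onlyMPB=True)
# The values below are fabricated for illustration only.
tree = {
    'SnapNum':   np.array([99, 98, 97, 96]),
    'SubfindID': np.array([1234, 1301, 1287, 1256]),
    'Mass':      np.array([10.2, 10.1, 9.9, 9.8]),
}

# Every field is a 1D array of the same length, so pandas accepts the
# dict directly: one row per point along the branch.
df = pd.DataFrame(tree).set_index('SnapNum')
print(df)
```

Indexing by SnapNum makes it easy to line up the same subhalo across snapshots, or to concatenate the branches of several subhalos into one frame later.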
Thank you for the advice, I might actually go with different dataframes entirely for different snapshots, but I'll try using il.sublink.loadTree() and converting the dictionary it returns into a dataframe.
Quick follow-up. I'm noticing that using the API to request ~44,000 URLs is taking a long time. Is there any way I can get the subhalo information (i.e. stellar mass, SFR, radius) without having to submit so many requests?
If you want to obtain information from the subhalo catalog for that many objects, I would suggest simply downloading the catalog ahead of time and reading it locally, rather than making one API request per subhalo.
(A walkthrough of this is exactly how the Example Scripts tutorial begins.)
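Once the group catalog files are downloaded, the whole snapshot's subhalo table can be read in one call and turned into a dataframe with one row per subhalo. The sketch below fabricates small arrays in place of the real il.groupcat.loadSubhalos() output (the commented call shows the assumed real usage); note that 2D fields such as SubhaloMassType must be sliced into individual columns, with index 4 being the stellar particle type in TNG's convention.

```python
import numpy as np
import pandas as pd

# Stand-in for il.groupcat.loadSubhalos(); with the catalog downloaded
# locally you would instead call something like:
#   fields = ['SubhaloMassType', 'SubhaloSFR', 'SubhaloHalfmassRad']
#   subhalos = il.groupcat.loadSubhalos(basePath, snapNum, fields=fields)
# The values below are fabricated for illustration only.
n = 5
rng = np.random.default_rng(0)
subhalos = {
    'SubhaloSFR':         rng.random(n),
    'SubhaloHalfmassRad': rng.random(n) * 30.0,
    'SubhaloMassType':    rng.random((n, 6)),  # one column per particle type
}

# 2D fields cannot go straight into a DataFrame; slice out the column
# you need (index 4 = stars in the TNG particle-type convention).
df = pd.DataFrame({
    'sfr':          subhalos['SubhaloSFR'],
    'halfmass_rad': subhalos['SubhaloHalfmassRad'],
    'stellar_mass': subhalos['SubhaloMassType'][:, 4],
})
df.index.name = 'SubfindID'  # catalog row order == subhalo ID at this snapshot
print(df.head())
```

Building one such dataframe per snapshot of interest matches the "different dataframes for different snapshots" approach mentioned above, and avoids the per-object API round trips entirely.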