Note
This document is not up-to-date!
Here we provide an overview of some currently supported community codes in AMUSE, along with a concise explanation of how each code works. This document serves as an initial guide to finding the code most applicable to a specific astrophysical problem. The supported codes are sorted according to their astrophysical domain.
The general parameters and methods for the gravitational dynamics are described in Stellar Dynamics Interface Definition. Here we describe the exceptions for the specific codes under “Specifics”.
Note: the table below refers to the AMUSE implementation of each code.
| name | approximation scheme | timestep scheme | CPU | GPU | GRAPE | language | stopcond (1) | parallel (2) |
|---|---|---|---|---|---|---|---|---|
| bhtree | tree | shared/fixed | Y | N | N | C/C++ | CST | N |
| bonsai | tree | | | | | | | |
| fi | tree | block/variable | Y | N | N | FORTRAN | S | N |
| hermite | direct | shared/variable | Y | N | N | C/C++ | CSOPT | Y |
| HiGPUs | direct | block time steps | N | Y | N | C/C++ | | Y (on GPU cluster) |
| huayno | approx. symplectic | | Y | Y (OpenCL) | N | C | N | |
| gadget | tree | individual | Y | N | N | C/C++ | S | Y |
| mercury | MVS symplectic | | Y | N | N | FORTRAN | N | |
| octgrav | tree | shared | N | Y | N | C/C++ | S | N |
| rebound | N | | | | | | | |
| phigrape | direct | block/variable | Y (g6) | Y (sapporo) | Y | FORTRAN | CSPT | N |
| smallN | direct Hermite 4th order | individual | Y | N | N | C/C++ | N | |
| twobody | universal variables, Kepler eq. | none, exact | Y | N | N | Python | N | |
(1) Stopping conditions:
| code | name of stopping condition |
|---|---|
| C | Collision detection |
| E | Escaper detection |
| S | Number of steps detection |
| O | Out of box detection |
| P | Pair detection |
| T | Timeout detection |
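A code's stopping conditions are enabled and queried through its stopping_conditions attribute. The sketch below shows collision detection (condition C); it assumes `instance` is a gravity-code instance that supports this condition (see the table above) and uses the generic AMUSE stopping-condition calls:

>>> from amuse.units import nbody_system
>>> instance.stopping_conditions.collision_detection.enable()
>>> instance.evolve_model(1.0 | nbody_system.time)
>>> if instance.stopping_conditions.collision_detection.is_set():
...     colliders = instance.stopping_conditions.collision_detection.particles(0)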
(2) Parallel in the following sense: AMUSE uses MPI to communicate with the codes, and for some codes MPI can also be used to parallelize the calculations themselves. Some codes (e.g. the GPU codes) are internally parallel, but that is not what this column refers to.
For codes designated Y in the parallel column the number of (parallel) workers can be set; e.g., to use 10 workers for Hermite:
>>> instance = Hermite(number_of_workers=10)
BHTree is an N-body integration module: an implementation of the Barnes & Hut tree code [1] by Jun Makino.
| name | default value | unit | description |
|---|---|---|---|
| use_self_gravity | 1 | none | flag for usage of self gravity, 1 or 0 (true or false) |
| timestep | 0.015625 | time | time step |
| epsilon_squared | 0.125 | length*length | smoothing parameter for gravity calculations (>0!) |
| ncrit_for_tree | 1024 | none | maximum number of particles sharing an interaction list |
| opening_angle | 0.75 | none | opening angle, theta, for building the tree: between 0 and 1 |
| stopping_conditions_number_of_steps | 1 | none | |
| stopping_conditions_timeout | 4.0 | seconds | |
| stopping_conditions_out_of_box_size | 0.0 | length | |
| time | 0.0 | time | current simulation time |
| dt_dia | 1.0 | time | time interval between diagnostics output |
smoothing parameter for gravity calculations (default value:0.125 length * length)
constant timestep for iteration (default value:0.015625 time)
opening angle, theta, for building the tree: between 0 and 1 (default value:0.75)
flag for usage of self gravity, 1 or 0 (true or false) (default value:1)
Ncrit, the maximum number of particles sharing an interaction list (default value:12)
time interval between diagnostics output (default value:1.0 time)
model time to start the simulation at (default value:0.0 time)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
example
>>> from amuse.community.bhtree.interface import BHTreeInterface, BHTree
>>> from amuse.units import nbody_system
>>> instance = BHTree(BHTree.NBODY)
>>> instance.parameters.epsilon_squared = 0.00001 | nbody_system.length**2
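Continuing this example, a small cluster can be generated and evolved. This is a minimal sketch in N-body units, assuming the standard AMUSE initial-conditions helper new_plummer_model (called new_plummer_sphere in older AMUSE versions):
>>> from amuse.ic.plummer import new_plummer_model
>>> particles = new_plummer_model(100)
>>> instance.particles.add_particles(particles)
>>> instance.evolve_model(1.0 | nbody_system.time)
>>> instance.stop()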
Hermite is a time-symmetric N-body integration module with a shared but variable time step (the same for all particles, but its size changing in time), using the Hermite integration scheme [2]. See also: ACS
| name | default value | unit | description |
|---|---|---|---|
| pair_factor | 1.0 | none | radius factor for pair detection |
| dt_param | 0.03 | none | timestep scaling factor |
| epsilon_squared | 0.0 | length*length | smoothing parameter for gravity calculations |
| stopping_conditions_number_of_steps | 1 | none | |
| stopping_conditions_timeout | 4.0 | seconds | |
| stopping_conditions_out_of_box_size | 0.0 | length | |
| time | 0.0 | time | current simulation time |
| dt_dia | 1.0 | time | time interval between diagnostics output |
smoothing parameter for gravity calculations (default value:0.0 length * length)
timestep scaling factor (default value:0.03)
accuracy factor for the end time of evolve_model: 0.0 will stop at exactly the end time, > 0.0 will stop on or after the end time; valid factors are between -1.0 and 1.0 (default value:0.0)
time interval between diagnostics output (default value:1.0 time)
if True will calculate back in time when evolve_model end time is less than systemtime (default value:False)
model time to start the simulation at (default value:0.0 time)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
Hut, P., Makino, J. & McMillan, S., 1995, ApJL 443, L93.
phiGRAPE is a direct N-body code optimized for running on a parallel GRAPE cluster. See Harfst et al. [3] for more details. The AMUSE version is capable of running on other platforms as well, by using interfaces that mimic the GRAPE hardware.
Sapporo is a library that mimics the behaviour of GRAPE hardware and uses the GPU to execute the force calculations [4].
This version of Sapporo is without multi-threading support and does not need C++. This makes it easier to integrate into Fortran codes, but beware: it can only use one GPU device per application!
A library which mimics the behaviour of GRAPE and uses the CPU; lowest hardware requirements.
AMUSE tries to build all implementations at compile time. In the phiGRAPE interface module the preferred mode can be selected with the mode parameter:
Just make it work: no optimizations, no special hardware requirements.
Using Sapporo: CUDA needed.
Using GRAPE hardware.
Phantom GRAPE: optimized for x86_64 processors.
>>> from amuse.community.phigrape.interface import PhiGRAPEInterface, PhiGRAPE
>>> instance = PhiGRAPE(PhiGRAPE.NBODY, PhiGRAPEInterface.MODE_GPU)
The default is MODE_G6LIB.
| name | default value | unit | description |
|---|---|---|---|
| initialize_gpu_once | 0 | none | set to 1 if the GPU must only be initialized once, 0 if it can be initialized for every call; if you want to run multiple instances of the code on the same GPU this parameter needs to be 0 (default) |
| initial_timestep_parameter | 0.0 | none | parameter to determine the initial timestep |
| timestep_parameter | 0.0 | none | |
| epsilon_squared | 0.0 | length*length | smoothing parameter for gravity calculations (>0!) |
| stopping_conditions_number_of_steps | 1 | none | |
| stopping_conditions_timeout | 4.0 | seconds | |
| stopping_conditions_out_of_box_size | 0.0 | length | |
smoothing parameter for gravity calculations (default value:0.0 length * length)
timestep parameter (default value:0.02)
parameter to determine the initial timestep (default value:0.01)
set to 1 if the gpu must only be initialized once, 0 if it can be initialized for every call. If you want to run multiple instances of the code on the same gpu this parameter needs to be 0 (default) (default value:0)
model time to start the simulation at (default value:0.0 time)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
>>> instance.parameters.timestep_parameter = 0.1
Harfst, S., Gualandris, A., Merritt, D., Spurzem, R., Portegies Zwart, S., & Berczik, P. 2006, NewAstron. 12, 357-377.
Gaburov, E., Harfst, S., Portegies Zwart, S. 2009, NewAstron. 14, 630-637.
Twobody is a semi-analytical code that solves the Kepler problem using universal variables [5]. The particle set provided has length one or two. If only one particle is given, its mass is assigned to a particle at the origin and its phase-space coordinates to the orbiter; this is useful when m1 >> m2.
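A minimal usage sketch for a Sun plus test-particle system; the import path is assumed to follow the usual amuse.community.<code>.interface pattern, and the numbers are illustrative only:
>>> from amuse.community.twobody.interface import TwoBody
>>> from amuse.units import units, nbody_system
>>> from amuse.datamodel import Particles
>>> converter = nbody_system.nbody_to_si(1.0 | units.MSun, 1.0 | units.AU)
>>> particles = Particles(2)
>>> particles.mass = [1.0, 0.0] | units.MSun
>>> particles.position = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]] | units.AU
>>> particles.velocity = [[0.0, 0.0, 0.0], [0.0, 29.8, 0.0]] | units.kms
>>> instance = TwoBody(converter)
>>> instance.particles.add_particles(particles)
>>> instance.evolve_model(1.0 | units.yr)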
Bate, R.R., Mueller, D.D., White, J.E., "Fundamentals of Astrodynamics", Dover, ISBN 0-486-60061-0.
Interface to the Kira Small-N Integrator and Kepler modules from Starlab. https://www.sns.ias.edu/~starlab/
You will need to download Starlab from the above site, make it, install it, and then set the STARLAB_INSTALL_PATH variable to be equal to the installation directory (typically something like ~/starlab/usr).
Starlab is available under the GNU General Public Licence (version 2), and is developed by:
Piet Hut
Steve McMillan
Jun Makino
Simon Portegies Zwart
Other Starlab Contributors:
Douglas Heggie
Kimberly Engle
Peter Teuben
| name | default value | unit | description |
|---|---|---|---|
| epsilon_squared | 0.0 | length*length | smoothing parameter for gravity calculations (>0!) |
| number_of_particles | 0.0 | none | |
Octgrav is a tree code which runs on GPUs with the NVIDIA CUDA architecture [6].
| name | default value | unit | description |
|---|---|---|---|
| opening_angle | 0.8 | none | opening angle for building the tree, between 0 and 1 |
| timestep | 0.01 | time | constant timestep for iteration |
| epsilon_squared | 0.01 | length*length | smoothing parameter for gravity calculations |
| stopping_conditions_number_of_steps | 1 | none | |
| stopping_conditions_timeout | 4.0 | seconds | |
| stopping_conditions_out_of_box_size | 0.0 | length | |
smoothing parameter for gravity calculations (default value:0.01 length * length)
constant timestep for iteration (default value:0.01 time)
opening angle for building the tree between 0 and 1 (default value:0.8)
model time to start the simulation at (default value:0.0 time)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
Gaburov, E., Bedorf, J., Portegies Zwart S., 2010, “Gravitational tree-code on graphics processing units: implementations in CUDA”, ICCS
Mercury is a general-purpose N-body integration package for problems in celestial mechanics.
This package contains some subroutines taken from the Swift integration package by H.F.Levison and M.J.Duncan (1994) Icarus, vol 108, pp18. Routines taken from Swift have names beginning with drift or orbel.
The standard symplectic (MVS) algorithm is described in J.Wisdom and M.Holman (1991) Astronomical Journal, vol 102, pp1528.
The hybrid symplectic algorithm is described in J.E.Chambers (1999) Monthly Notices of the RAS, vol 304, pp793.
Currently Mercury has an interface that differs from the other gravitational dynamics interfaces. The class is called MercuryWayWard and will lose this qualifier once the interface is standardized (work in progress). It handles two kinds of particles: the centre particle and the orbiters. The centre particle set is restricted to a single particle, which should be the heaviest and much heavier than the orbiters; it sits at the origin of phase space.
Apart from the usual phase-space coordinates, particles in Mercury have a spin, and the centre particle has oblateness parameters expressed in the moments j2, j4 and j6. Orbiters have a density.
Furthermore, Mercury does not use N-body units but the units listed in the table below.
Centre particle:

| attribute | components | unit |
|---|---|---|
| mass | | MSun |
| radius | | AU |
| oblateness | j2, j4, j6 | AU^2 etc. |
| angular momentum | | MSun AU^2 day^-1 |

Orbiters:

| attribute | components | unit |
|---|---|---|
| mass | | MSun |
| density | | g/cm^3 |
| position | x, y, z | AU |
| velocity | vx, vy, vz | AU/day |
| celimit (close encounters) | | Hill radii (units.none in AMUSE) |
| angular momentum | Lx, Ly, Lz | MSun AU^2 day^-1 |
Hierarchically split-Up Approximately sYmplectic N-body sOlver (HUAYNO)
Inti Pelupessy - January 2011
HUAYNO is a code to solve the astrophysical N-body problem. It uses recursive Hamiltonian splitting to generate multiple-timestep integrators which conserve momentum to machine precision. A number of different integrators are available. The code has been developed within the AMUSE environment. It can make use of GPUs - for this an OpenCL version can be compiled.
Use of the code is the same as for any gravity code within AMUSE. There are three parameters for the code: the smoothing length squared (eps2), the timestep parameter (eta) and a parameter to select the integrator; they can be set as in the example below.
- timestep_parameter (eta): recommended values eta = 0.0001 - 0.05
- epsilon_squared (eps2): eps2 can be zero or non-zero
- inttype_parameter (inttype): possible values for inttype are described below
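For example (a minimal sketch; the parameter names are those listed above and the import path follows the usual AMUSE pattern):
>>> from amuse.community.huayno.interface import Huayno
>>> from amuse.units import nbody_system
>>> instance = Huayno()
>>> instance.parameters.epsilon_squared = 0.0 | nbody_system.length**2
>>> instance.parameters.timestep_parameter = 0.01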
Miscellaneous:
The code assumes G=1,
Collisions are not implemented (needs rewrite),
workermp option can be used for OpenMP parallelization,
The floating point precision of the calculations can be changed by setting the FLOAT and DOUBLE definitions in evolve.h. FLOAT sets the precision of the calculations, DOUBLE the precision of the position and velocity reductions. They can be set to e.g. float, double, long double or __float128. It is advantageous to set DOUBLE to a higher precision than FLOAT; the recommended combination is double/long double.
the AMUSE interface uses double precision
It is unlikely the integer types in evolve.h would need to be changed (INT should be able to hold particle number, LONG should be able to hold interaction counts)
OpenCL operation:
By compiling worker_cl it is possible to offload the force and timestep loops to the GPU. The implementation is based on the STDCL library (www.browndeertechnology.com), so this library should be compiled first. In the Makefile the corresponding directories should point to the installation directory of STDCL. The defines in evolve_cl.h should be set appropriately for the OpenCL configuration: CLCONTEXT: stdgpu or stdcpu; NTHREAD: 64 for GPU, 2 for CPU (actually 64 will also work fine for the CPU); BLOCKSIZE: number of particles stored in local memory (64 is a good starting guess). evolve_kern.cl contains the OpenCL kernels. The precision of the calculation is controlled by the FLOAT(4) defines in evolve_kern.cl and CLFLOAT(4) in evolve.h; they should agree with each other (i.e. float and cl_float, or double and cl_double).
Giersz, M. 1998, MNRAS, 298, 1239
Giersz, M. 2001, MNRAS, 324, 218
Giersz, M. 2006, MNRAS, 371, 484
| name | description |
|---|---|
| irun | initial sequence of random numbers |
| nt | total number of objects (stars and binaries) at T=0; ns - number of single stars, nb - number of binaries, nt = ns+nb, nss - number of stars (nss = nt+nb) |
| istart | 1 - initial model, .ne.1 - restart |
| ncor | number of stars to calculate the central parameters |
| nmin | minimum number of stars to calculate the central parameters |
| nz0 | number of stars in each zone at T=0 |
| nzonc | minimum number of zones in the core |
| nminzo | minimum number of stars in a zone |
| ntwo | maximum index of 2 |
| imodel | initial model: 1 - uniform & isotropic, 2 - Plummer, 3 - King, 4 - M67 |
| iprint | 0 - full diagnostic information, 1 - diagnostic info. suppressed |
| ib3f | 1 - Spitzer's, 2 - Heggie's formula for three-body binary interaction with field stars, 3 - use Pmax for interaction probability, 4 - three- and four-body numerical integration |
| iexch | 0 - no exchange in any interactions, 1 - exchange only in binary - field star interactions, 2 - exchange in all interactions (binary - field and binary - binary) |
| tcrit | termination time in units of the crossing time |
| tcomp | maximum computing time in hours |
| qe | energy tolerance |
| alphal | power-law index of the initial mass function for masses smaller than the break mass; -1 - equal-mass case |
| alphah | power-law index of the initial mass function for masses greater than the break mass; if alphal = alphah the IMF does not have a break |
| brakem | the mass at which the IMF is broken; if brakem is smaller than the minimum mass (bodyn) then the break mass is as for the Kroupa mass function (brakem = 0.5 Mo) |
| body1 | maximum particle mass before scaling (solar mass) |
| bodyn | minimum particle mass before scaling (solar mass) |
| fracb | primordial binary fraction by number; nb = fracb*nt, ns = (1 - fracb)*nt, nss = (1 + fracb)*nt; fracb > 0 - primordial binaries, fracb = 0 - only dynamical binaries |
| amin | minimum semi-major axis of binaries (in solar units); = 0 then amin = 2*(R1+R2), > 0 then amin = amin |
| amax | maximum semi-major axis of binaries (in solar units) |
| qvir | virial ratio (qvir = 0.5 for equilibrium) |
| rbar | tidal radius in pc, half-mass radius in pc for an isolated cluster; no scaling - rbar = 1 |
| zmbar | total mass of the cluster in solar masses; no scaling - zmbar = 1 |
| w0 | King model parameter |
| bmin | minimum value of sin(beta^2/2) |
| bmax | maximum value of sin(beta^2/2) |
| tau0 | time step for a complete cluster model |
| gamma | parameter in the Coulomb logarithm (standard value = 0.11) |
| xtid | coefficient in front of the cluster tidal energy: -xtid*smt/rtid |
| rplum | for M67: rtid = rplum*rsplum (rsplum - scale radius for the Plummer model) |
| dttp | time step (Myr) for profile output |
| dtte | time step (Myr) for mloss call for all objects |
| dtte0 | time step (Myr) for mloss call for all objects for tphys less than tcrevo; for tphys greater than tcrevo the time step is equal to dtte |
| tcrevo | critical time at which the time step for the mloss call changes from dtte0 to dtte |
| xtau | call mloss for a particular object when (uptime(im1) - olduptime(im1))/tau/tscale < xtau |
| ytau | multiplication of tau0 (tau = ytau*tau0) after time greater than tcrevo |
| ybmin | multiplication of bmin0 (bmin = ybmin*bmin0) after time greater than tcrevo |
| zini | initial metallicity (solar z = 0.02, globular clusters: M4 - z = 0.002, NGC6397 - z = 0.0002) |
| ikroupa | 0 - the initial binary parameters are picked according to Kroupa's eigenevolution and feeding algorithm (Kroupa 1995, MNRAS 277, 1507), 1 - the initial binary parameters are picked as for the M67 model (Hurley et al. 2005) |
| iflagns | 0 - no SN natal kicks for NS formation, 1 - SN natal kicks only for single NS formation, 2 - SN natal kick for single NS formation and NS formation in binaries |
| iflagbh | 0 - no SN natal kicks for BH formation, 1 - SN natal kicks only for single BH formation, 2 - SN natal kick for single BH formation and BH formation in binaries |
| nitesc | 0 - no iteration of the tidal radius and induced mass loss due to stellar evolution, 1 - iteration of the tidal radius and induced mass loss due to stellar evolution |
HiGPUs is a parallel direct N-body code based on a 6th order Hermite integrator. The code has been developed by Capuzzo-Dolcetta, Punzo and Spera (Dep. of Physics, Sapienza, Univ. di Roma; see astrowww.phys.uniroma1.it/dolcetta/HPCcodes/HiGPUs.html) and uses MPI, OpenMP and CUDA libraries at the same time to fully exploit all the capabilities offered by hybrid supercomputing platforms. Moreover, it is implemented using block time steps (individual time stepping), so that it can deal with stiff problems such as highly collisional gravitational N-body problems.
| name | default value | unit | description |
|---|---|---|---|
| eta_6 | 0.4 | none | eta parameter for determining the stars' time steps |
| eta_4 | 0.01 | none | eta parameter for initializing blocks |
| eps | 0.001 | length | smoothing parameter for gravity calculations |
| r_scale_galaxy | 0.0 | length | scale radius for analytical galaxy potential |
| r_core_plummer | 0.0 | length | core radius for analytical Plummer potential |
| mass_plummer | 0.0 | mass | total mass for analytical Plummer potential |
| start_time | 0.0 | time | initial simulation time |
| min_step | -30.0 | none | exponent which defines the minimum time step allowed for stars (2^exponent) |
| max_step | -3.0 | none | exponent which defines the maximum time step allowed for stars (2^exponent) |
| n_Print | 1000000 | none | maximum number of snapshots |
| dt_Print | 1.0 | time | time interval between diagnostics output |
| n_gpu | 2 | none | number of GPUs per node |
| gpu_name | GeForce GTX 480 | none | GPUs to use |
| Threads | 128 | none | number of GPU threads per block |
| output_path_name | ../../test_results/ | none | path where HiGPUs output will be stored |
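A minimal sketch of setting a few of these parameters from AMUSE; the import path and class name are assumed to follow the usual amuse.community.<code>.interface pattern, and the parameter names are those from the table above:
>>> from amuse.community.higpus.interface import HiGPUs
>>> from amuse.units import nbody_system
>>> instance = HiGPUs()
>>> instance.parameters.eta_6 = 0.4
>>> instance.parameters.eta_4 = 0.01
>>> instance.parameters.eps = 0.001 | nbody_system.length
>>> instance.parameters.n_gpu = 1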
For more information about the parameters check the readme file in the docs folder. The following are the maximum performances (in Gflops) reached using different single GPUs, installed one at a time in a workstation equipped with 2 Intel Xeon X5650 CPUs, 12 GB of ECC RAM at 1333 MHz, Ubuntu Lucid 10.04 x86_64 and a Supermicro X8DTG-QF motherboard:
TESLA C1060 : 107
TESLA C2050 : 395
TESLA M2070 : 391
GeForce GTX 480 : 265
GeForce GTX 580 : 311
To use the code with AMD GPUs you can download the OpenCL version from the website.
Stellar evolution is performed by the rapid single-star evolution (SSE) algorithm. This is a package of analytical formulae fitted to the detailed models of Pols et al. (1998) that covers all phases of evolution from the zero-age main-sequence up to and including remnant phases. It is valid for masses in the range 0.1-100 Msun and metallicity can be varied. The SSE package contains a prescription for mass loss by stellar winds. It also follows the evolution of rotational angular momentum for the star. Full details can be found in the SSE paper:
Hurley J.R., Pols O.R., Tout C.A., 2000, MNRAS, 315, 543
| | min | max | unit |
|---|---|---|---|
| Mass | 0.1 | 100 | Msun |
| Metallicity | 0.0001 | 0.03 | fraction (0.02 is solar) |
Metallicity of all stars (default value:0.02)
Reimers mass-loss coefficient (neta*4x10^-13; 0.5 normally) (default value:0.5)
The binary enhanced mass loss parameter (inactive for single). (default value:0.0)
Helium star mass loss factor (default value:0.5)
The dispersion in the Maxwellian for the SN kick speed (190 km/s). (default value:190.0 km / s)
ifflag > 0 uses white dwarf IFMR (initial-final mass relation) of HPE, 1995, MNRAS, 272, 800 (0). (default value:0)
wdflag > 0 uses modified-Mestel cooling for WDs (0). (default value:1)
bhflag > 0 allows velocity kick at BH formation (0). (default value:0)
nsflag > 0 takes NS/BH mass from Belczynski et al. 2002, ApJ, 572, 407 (1). (default value:1)
The maximum neutron star mass (1.8, nsflag=0; 3.0, nsflag=1). (default value:3.0 MSun)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: MS (0.05) (default value:0.05)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: GB, CHeB, AGB, HeGB (0.01) (default value:0.01)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: HG, HeMS (0.02) (default value:0.02)
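A minimal sketch of evolving a single star with SSE (using the default metallicity of 0.02):
>>> from amuse.community.sse.interface import SSE
>>> from amuse.units import units
>>> from amuse.datamodel import Particles
>>> stars = Particles(1)
>>> stars.mass = 2.0 | units.MSun
>>> instance = SSE()
>>> instance.particles.add_particles(stars)
>>> instance.evolve_model(1.0 | units.Gyr)
>>> print(instance.particles[0].mass, instance.particles[0].stellar_type)
>>> instance.stop()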
Binary evolution is performed by the rapid binary-star evolution (BSE) algorithm. Circularization of eccentric orbits and synchronization of stellar rotation with the orbital motion owing to tidal interaction is modelled in detail. Angular momentum loss mechanisms, such as gravitational radiation and magnetic braking, are also modelled. Wind accretion, where the secondary may accrete some of the material lost from the primary in a wind, is allowed with the necessary adjustments made to the orbital parameters in the event of any mass variations. Mass transfer also occurs if either star fills its Roche lobe and may proceed on a nuclear, thermal or dynamical time-scale. In the latter regime, the radius of the primary increases in response to mass-loss at a faster rate than the Roche-lobe of the star. Stars with deep surface convection zones and degenerate stars are unstable to such dynamical time-scale mass loss unless the mass ratio of the system is less than some critical value. The outcome is a common-envelope event if the primary is a giant star. This results in merging or formation of a close binary, or a direct merging if the primary is a white dwarf or low-mass main-sequence star. On the other hand, mass transfer on a nuclear or thermal time-scale is assumed to be a steady process. Prescriptions to determine the type and rate of mass transfer, the response of the secondary to accretion and the outcome of any merger events are in place in BSE and the details can be found in the BSE paper:
Hurley J.R., Tout C.A., & Pols O.R., 2002, MNRAS, 329, 897
Hurley J.R., Pols O.R., Tout C.A., 2000, MNRAS, 315, 543
| | min | max | unit |
|---|---|---|---|
| Mass | 0.1 | 100 | Msun |
| Metallicity | 0.0001 | 0.03 | fraction (0.02 is solar) |
| Period | all | all | |
| Eccentricity | 0.0 | 1.0 | |
Metallicity of all stars (default value:0.02)
Reimers mass-loss coefficient (neta*4x10^-13; 0.5 normally) (default value:0.5)
The binary enhanced mass loss parameter (inactive for single). (default value:0.0)
Helium star mass loss factor (default value:1.0)
The common-envelope efficiency parameter (default value:1.0)
The binding energy factor for common envelope evolution (default value:0.5)
ceflag > 0 activates spin-energy correction in common-envelope. ceflag = 3 activates de Kool common-envelope model (0). (default value:0)
tflag > 0 activates tidal circularisation (1). (default value:1)
ifflag > 0 uses white dwarf IFMR (initial-final mass relation) of HPE, 1995, MNRAS, 272, 800 (0). (default value:0)
wdflag > 0 uses modified-Mestel cooling for WDs (0). (default value:1)
bhflag > 0 allows velocity kick at BH formation (0). (default value:0)
nsflag > 0 takes NS/BH mass from Belczynski et al. 2002, ApJ, 572, 407 (1). (default value:1)
The maximum neutron star mass (1.8, nsflag=0; 3.0, nsflag=1). (default value:3.0 MSun)
The random number seed used in the kick routine. (default value:29769)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: MS (0.05) (default value:0.05)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: GB, CHeB, AGB, HeGB (0.01) (default value:0.01)
The timesteps chosen in each evolution phase as decimal fractions of the time taken in that phase: HG, HeMS (0.02) (default value:0.02)
The dispersion in the Maxwellian for the SN kick speed (190 km/s). (default value:190.0 km / s)
The wind velocity factor: proportional to vwind**2 (1/8). (default value:0.125)
The wind accretion efficiency factor (1.0). (default value:1.0)
The Bondi-Hoyle wind accretion factor (3/2). (default value:1.5)
The fraction of accreted matter retained in nova eruption (0.001). (default value:0.001)
The Eddington limit factor for mass transfer (1.0). (default value:1.0)
The angular momentum factor for mass lost during Roche (-1.0). (default value:-1.0)
Single-star and binary evolution is performed by the rapid stellar evolution algorithm SeBa. Stars are evolved from the zero-age main sequence until remnant formation and beyond. Single stellar evolution is modeled with analytical formulae based on fits to detailed single star tracks at different metallicities (Hurley, Pols & Tout, 2000, 315, 543). Stars are parametrised by mass, radius, luminosity, core mass, etc. as functions of time and initial mass. Mass loss from winds, which is substantial e.g. for massive stars and post main-sequence stars, is included.
Furthermore, the SeBa package contains an algorithm for rapid binary evolution calculations. Binary interactions such as wind accretion, tidal interaction and angular momentum loss through (wind) mass loss, magnetic braking, or gravitational radiation are taken into account at every timestep with appropriate recipes. The stability and rate of mass transfer are dependent on the reaction to mass change of the stellar radii and the corresponding Roche lobes. If the mass transfer takes place on the dynamical timescale of the donor star, the mass transfer becomes quickly unstable, and a common-envelope phase follows. If mass transfer occurs in a stable way, SeBa models the response of the companion star and possible mass loss and angular momentum loss at every timestep. After mass transfer ceases in a binary system, the donor star turns into a remnant or a helium-burning star without a hydrogen envelope. When instead, the mass transfer leads to a merger between the binary stars, the resulting stellar product is estimated and the subsequent evolution is followed.
More information on SeBa can be found in the following papers:
“Population synthesis of high-mass binaries” Portegies Zwart, S.F., Verbunt, F. 1996, 309, 179P
“Population synthesis for double white dwarfs . I. Close detached systems” Nelemans, G., Yungelson, L.R., Portegies Zwart, S.F., Verbunt, F. 2001, 365, 491N
“Supernova Type Ia progenitors from merging double white dwarfs. Using a new population synthesis model” Toonen, S., Nelemans, G., Portegies Zwart, S. 2012, 546A, 70T
“The effect of common-envelope evolution on the visible population of post-common-envelope binaries” Toonen, S., Nelemans, G. 2013, 557A, 87T
| | min | max | unit |
|---|---|---|---|
| Mass | 0.1 | 100 | Msun |
| Metallicity | 0.0001 | 0.03 | fraction (0.02 is solar) |
| Period | all | all | |
| Eccentricity | 0.0 | 1.0 | |
Metallicity of all stars (default value:0.02)
Kick velocity given to a compact object formed in a supernova (default value:600 km/s)
if True will log star state before and after evolve in starev.data (default value:False)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
Evtwin is based on Peter Eggleton's stellar evolution code, and actually solves the differential equations that apply to the interior of a star. It is therefore more accurate, but also much slower, than the analytic fits-based SSE and SeBa algorithms explained above. Binaries are not yet supported in the AMUSE interface to evtwin, and neither is the work-around for the helium flash. Currently only solar metallicity is supported.
Relevant papers:
Eggleton, P.P. 1971, MNRAS, 151, 351
Eggleton, P.P. 1972, MNRAS, 156, 361
Eggleton, P.P. 1973, MNRAS, 163, 279
Eggleton, P.P., Faulkner, J., & Flannery, B.P. 1973, A&A, 23, 325
Han, Z., Podsiadlowski, P., & Eggleton, P.P. 1994, MNRAS, 270, 121
Pols, O.R., Tout, C.A., Eggleton, P.P., & Han, Z. 1995, MNRAS, 274, 964
Eggleton, P.P. 2001, Evolution of Binary and Multiple Star Systems, 229, 157
Nelson, C.A., & Eggleton, P.P. 2001, ApJ, 552, 664
Eggleton, P.P., & Kiseleva-Eggleton, L. 2002, ApJ, 575, 461
Stancliffe, Glebbeek, Izzard & Pols, 2007 A&A
Eldridge & Tout, 2004 MNRAS 348
Glebbeek, Pols & Hurley, 2008 A&A
The software project MESA (Modules for Experiments in Stellar Astrophysics, http://mesa.sourceforge.net/) aims to provide state-of-the-art, robust, and efficient open source modules, usable singly or in combination for a wide range of applications in stellar astrophysics. Since the package is rather big (about 800 MB download, >2 GB built), this community code is optional and does not install automatically. Set the environment variable DO_INSTALL_MESA and run make to download and install it. The AMUSE interface to MESA can create and evolve stars using the MESA/STAR module. If you request a metallicity you haven't used before, starting models will be computed automatically and saved in the mesa/src/data/star_data/starting_models directory (please be patient…). All metallicities are supported, even the interesting case of Z=0. The supported stellar mass range is from about 0.1 to 100 Msun.
References:
Paxton, Bildsten, Dotter, Herwig, Lesaffre & Timmes 2010, ApJS submitted, arXiv:1009.1622
Athena is a grid-based code for astrophysical hydrodynamics. Athena can solve magnetohydrodynamics (MHD) as well, but this is currently not supported from AMUSE. It was developed primarily for studies of the interstellar medium, star formation, and accretion flows. The code implements:
- compressible hydrodynamics and MHD in 1D, 2D, and 3D,
- ideal gas equation of state with arbitrary γ (including γ = 1, an isothermal EOS),
- an arbitrary number of passive scalars advected with the flow,
- self-gravity, and/or a static gravitational potential,
- Ohmic resistivity, ambipolar diffusion, and the Hall effect,
- both Navier-Stokes and anisotropic (Braginskii) viscosity,
- both isotropic and anisotropic thermal conduction,
- optically-thin radiative cooling.
The code also supports:
- Cartesian or cylindrical coordinates,
- static (fixed) mesh refinement,
- shearing-box source terms, and an orbital advection algorithm for MHD,
- parallelization using domain decomposition and MPI.
A variety of choices are also available for the numerical algorithms, such as different Riemann solvers and spatial reconstruction methods.
The relevant references are:
Gardiner & Stone 2005, JCP, 205, 509 (2D JCP Method)
Gardiner & Stone 2007, JCP, 227, 4123 (3D JCP Method)
Stone et al. 2008, ApJS, 178, 137 (Method)
Stone & Gardiner 2009, NewA, 14, 139 (van Leer Integrator)
Skinner & Ostriker 2010, ApJ, 188, 290 (Cylindrical Integrator)
Stone & Gardiner 2010, ApJS, 189, 142 (Shearing Box Method)
Capreole is a grid-based astrophysical hydrodynamics code developed by Garrelt Mellema. It works in one, two, and three spatial dimensions and is programmed in Fortran 90. It is parallelized with MPI. For the hydrodynamics it relies on the Roe-Eulderink-Mellema (REM) solver, which is an approximate Riemann solver for arbitrary metrics. It can solve different hydrodynamics problems. Capreole has run on single processors, but also on massively parallel systems (e.g. 512 processors on a BlueGene/L).
The reference for Capreole (original version):
Mellema, Eulderink & Icke 1991, A&A 252, 718
ratio of specific heats used in equation of state (default value:1.6666666666666667)
boundary conditions on first (inner, left) X boundary (default value:reflective)
boundary conditions on second (outer, right) X boundary (default value:reflective)
boundary conditions on first (inner, front) Y boundary (default value:reflective)
boundary conditions on second (outer, back) Y boundary (default value:reflective)
boundary conditions on first (inner, bottom) Z boundary (default value:reflective)
boundary conditions on second (outer, top) Z boundary (default value:reflective)
boundary conditions for the X direction (default value:['reflective', 'reflective'])
boundary conditions for the Y direction (default value:['reflective', 'reflective'])
boundary conditions for the Z direction (default value:['reflective', 'reflective'])
number of processors for the x direction (default value:0)
number of processors for the y direction (default value:0)
number of processors for the z direction (default value:0)
number of processors for each dimension (default value:[0, 0, 0])
number of cells in the x direction (default value:10)
number of cells in the y direction (default value:10)
number of cells in the z direction (default value:10)
length of model in the x direction (default value:10 length)
length of model in the y direction (default value:10 length)
length of model in the z direction (default value:10 length)
number of cells in the x, y and z directions (default value:[10, 10, 10])
length of the model in the x, y and z directions (default value:[10, 10, 10] length)
max wallclock time available for the evolve step (default value:4.0 s)
max inner loop evals (default value:1.0)
size of cube (default value:0.0 length)
minimum density of a gas particle (default value:-1.0 mass / (length**3))
maximum density of a gas particle (default value:-1.0 mass / (length**3))
minimum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
maximum internal energy of a gas particle (default value:-1.0 length**2 / (time**2))
if True use the center of mass to determine the location of the box, if False use (0,0,0), is not used by all codes (default value:False)
FI is a parallel TreeSPH code for galaxy simulations. Extensively rewritten, extended and parallelized, it is a development of the code by Jeroen Gerritsen and Roelof Bottema, which itself goes back to TreeSPH.
The relevant references are:
Hernquist & Katz 1989, ApJS 70, 419
Gerritsen & Icke 1997, A&A 325, 972
Pelupessy, van der Werf & Icke 2004, A&A 422, 55
Pelupessy, PhD thesis 2005, Leiden Observatory
Interface for the Fi smoothed-particle hydrodynamics code.
GADGET-2 computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces, currently not supported from the AMUSE interface) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited.
The relevant references are:
Springel V., 2005, MNRAS, 364, 1105 (GADGET-2)
Springel V., Yoshida N., White S. D. M., 2001, New Astronomy, 6, 51 (GADGET-1)
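GADGET-2 is typically used with a unit converter so that particle data can be given in physical units. A minimal instantiation sketch with arbitrary example scales (the mass and length scales here are illustrative only):
>>> from amuse.community.gadget2.interface import Gadget2
>>> from amuse.units import units, nbody_system
>>> converter = nbody_system.nbody_to_si(1.0e5 | units.MSun, 1.0 | units.parsec)
>>> instance = Gadget2(converter)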
SimpleX (Delaunay triangulation based)
SimpleX computes the transport of radiation on an irregular grid composed of the Delaunay triangulation of a particle set. Radiation is transported along the vertices of the triangulation. The code can be considered as a particle based radiative transfer code: in this case particles sample the gas density, but can be both absorbers and sources of radiation. Calculation time with SimpleX scales linearly with the number of particles. At the moment the code calculates the transport of ionizing radiation in the grey (one frequency) approximation. It is especially well suited to couple with SPH codes.
Particle sets sent to SimpleX must have the attributes x [pc], y [pc], z [pc], rho [amu/cm**3], flux [s**-1] and xion [none].
Care must be taken that the particle set fits inside box_size.
The default hilbert_order should work for most particle distributions.
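A minimal sketch of building such a particle set and handing it to SimpleX; the import path follows the usual AMUSE pattern, the numbers are illustrative only, and box_size is the parameter mentioned above:
>>> from amuse.community.simplex.interface import SimpleX
>>> from amuse.units import units
>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.x = [0.0, 1.0] | units.parsec
>>> particles.y = [0.0, 0.0] | units.parsec
>>> particles.z = [0.0, 0.0] | units.parsec
>>> particles.rho = 1.0 | units.amu / units.cm**3
>>> particles.flux = [5.0e48, 0.0] | units.s**-1
>>> particles.xion = 0.0
>>> instance = SimpleX()
>>> instance.parameters.box_size = 4.0 | units.parsec
>>> instance.particles.add_particles(particles)
>>> instance.evolve_model(1.0 | units.Myr)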
References:
Paardekooper J.-P., 2010, PhD thesis, University of Leiden
Paardekooper J.-P., Kruip, C. J. H., Icke V., 2010, A&A, 515, 79 (SimpleX2)
Ritzerveld, J., & Icke, V. 2006, Phys. Rev. E, 74, 26704 (SimpleX)
Work in progress