1. Code Name: M3D-C1
2. Code Category: 3D Nonlinear 2F MHD
3. Primary Developer: N. Ferraro, S. Jardin
4. Other Developers and Users: J. Breslau, S. Hudson, a graduate student, J. Chen, RPI SCOREC group
5. Short description: Implicit 3D 2F MHD code primarily designed for highly magnetized toroidal geometry
6. Computer Language: Fortran 90, ~30,000 lines.
7. Type of input required (including files and/or output from other codes). Is there any special input preparation system (eg, GUI): Namelist input file: C1input. Mesh files: struct-curveDomain.sms and AnalyticModel. Can accept geqdsk and jsolver (POLAR) equilibrium files. For the resistive wall problem, vacuum matrices are also needed.
8. Type of output produced: Graphics file (C1.h5) and restart file.
9. Describe any postprocessors which read the output files: There is a comprehensive IDL graphics utility that reads the C1.h5 file. There is also now a VISIT capability that reads that file.
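For readers unfamiliar with HDF5 output like C1.h5, the sketch below shows generic HDF5 access with the h5py Python package. The file path and dataset name ("fields/psi") are invented for illustration and do not reflect the actual internal layout of C1.h5, which is read by the IDL and VISIT tools mentioned above.

```python
import os
import tempfile

import h5py
import numpy as np

# Write a small HDF5 file analogous in spirit to a C1.h5 output file.
# The group/dataset name "fields/psi" is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("fields/psi", data=np.arange(6.0).reshape(2, 3))

# Read it back, as a postprocessor would.
with h5py.File(path, "r") as f:
    psi = f["fields/psi"][:]
```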
10. Status and location of code input/output documentation: SVN repo at Princeton
11. Code web site? No
12. Is code under version control? What system? Is automated regression testing performed? Yes. SVN with automated regression testing
13. One to two paragraph description of equations solved and functionality including what discretizations are used in space and time: The M3D-C1 code is a modern, implicit, three-dimensional simulation code for solving the 2-fluid MHD equations. It is applicable to a wide range of configurations, but as described in [1], it is especially effective when applied to toroidal geometry where a strong toroidal magnetic field is present. Besides solving the full set of 2-fluid MHD equations, there are options to solve two energy-conserving sets of 2-field and 4-field reduced MHD equations. The code also has a "linear option" in which Fourier analysis is performed in the toroidal direction and linear toroidal eigenmodes are computed. An option for a toroidal "resistive wall" surrounding the computational domain, itself surrounded by a true vacuum, is now being added.
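The toroidal Fourier representation underlying the "linear option" can be illustrated as follows (a sketch only, not M3D-C1 source): a perturbation is represented as a single toroidal harmonic exp(i*n*phi), and the amplitude of mode number n in a field sampled on equally spaced toroidal planes is a discrete Fourier projection.

```python
import numpy as np

# Illustration only: project a scalar sampled on equally spaced toroidal
# planes onto a single toroidal mode number n.

def toroidal_mode(field, n):
    """Complex amplitude c_n, where field(phi) = sum_n c_n * exp(i*n*phi)."""
    nphi = len(field)
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    return np.sum(field * np.exp(-1j * n * phi)) / nphi

# Example: a field with n = 2 and n = 5 content sampled on 32 planes.
phi = 2.0 * np.pi * np.arange(32) / 32
f = 3.0 * np.cos(2.0 * phi) + 0.5 * np.sin(5.0 * phi)
```

Here |c_2| = 1.5 (half the cosine amplitude, the other half sitting in c_{-2}). M3D-C1's actual linear option solves for the toroidal eigenmode directly rather than projecting a 3D field; this only illustrates the representation.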
14. What modes of operation of code are there? Either linear or nonlinear; either 2-variable, 4-variable, or 6-variable (full) MHD; either cylindrical or toroidal geometry.
15. Journal references describing code: N. M. Ferraro, S. C. Jardin, "Calculations of two-fluid MHD axisymmetric steady-states", J. Comput. Phys. 228 (2009) 7742-7770; J. Breslau, N. Ferraro, and S. Jardin, "Some properties of the M3D-C1 form of the 3D MHD equations", Phys. Plasmas 16, 092503 (2009)
16. Codes it is similar to and differences:
M3D: M3D-C1 uses a similar decomposition of the vector fields as does M3D, but uses high-order finite elements and a fully implicit time advance which allows it to take much larger timesteps.
NIMROD: M3D-C1 uses high-order finite elements with C1 continuity in all three dimensions, whereas NIMROD uses high-order C0 elements only in the (R,Z) plane and uses Fourier modes in the toroidal direction. A similar implicit time advance is used in both codes, but the difference in the spatial representation leads to a large difference in the sparse matrix equation that needs to be solved each timestep.
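The C1-continuity property can be seen in a 1-D analogue (illustration only; M3D-C1's actual elements are two-dimensional, and element length is taken as 1 here so the derivative unknowns need no rescaling): Hermite-type elements carry the function value and its derivative as separate unknowns at each node, so both are automatically continuous across element boundaries.

```python
# Illustration only, not M3D-C1 source: cubic Hermite (C1) elements in 1-D.

def hermite(t, f0, d0, f1, d1):
    """Cubic Hermite interpolant on [0, 1] from endpoint values/derivatives."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*f0 + h10*d0 + h01*f1 + h11*d1

def hermite_deriv(t, f0, d0, f1, d1):
    """Derivative of the interpolant with respect to t."""
    return (6*t**2 - 6*t)*f0 + (3*t**2 - 4*t + 1)*d0 \
         + (-6*t**2 + 6*t)*f1 + (3*t**2 - 2*t)*d1

# Two adjacent elements sharing a node with value 2.0 and slope -1.0:
left = (1.0, 0.5, 2.0, -1.0)    # (f, f') at left node, (f, f') at shared node
right = (2.0, -1.0, 0.0, 3.0)   # (f, f') at shared node, (f, f') at right node
```

Because the shared node's value and slope are the same unknowns in both elements, the interpolant and its first derivative match at the boundary by construction; that is the C1 property that lets fourth-order spatial operators be handled without auxiliary variables.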
17. Results of code verification and convergence studies:
[1] Basic convergence studies: S. C. Jardin, J. Breslau, and N. Ferraro, "A high-order implicit finite element method for integrating the 2F MHD equations in 2D", J. Comput. Phys. 226, 2146 (2007)
[2] We benchmarked this code with SEL and with NIMROD on the resistive MHD and 2-fluid Hall MHD GEM nonlinear reconnection problems. See http://w3.pppl.gov/cemm/gem.htm
[3] We agreed with an analytic calculation for the gyroviscous stabilization of the gravitational instability. N. Ferraro and S. C. Jardin, "Finite Element implementation of Braginskii's gyroviscous stress with application to the gravitational instability", Phys. Plasmas 13, 092101 (2006)
[4] We agreed with analytic formulas for Pfirsch-Schluter diffusion in a torus. N. M. Ferraro and S. C. Jardin, "Calculations of two-fluid magnetohydrodynamic axisymmetric steady-states", J. Comput. Phys. 228 (2009) 7742-7770.
[5] We performed successful benchmarking with the ELITE and GATO codes for the growth rate of ideal edge localized modes in the ideal MHD limit for a wide range of toroidal mode numbers. N. M. Ferraro, S. C. Jardin, P. Snyder, "Ideal and resistive edge stability calculations with M3D-C1", Phys. Plasmas 17, 102508 (2010)
[6] We have a resistive MHD benchmark of a tearing mode with the original M3D code. This has not yet been published.
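The GEM benchmark in [2] starts from a Harris current sheet. As an illustration only (not M3D-C1 source; B0 = 1 and half-thickness lam = 0.5 are the commonly assumed GEM values), the equilibrium field and its supporting current density are:

```python
import math

# Harris sheet: B_x(z) = B0*tanh(z/lam), with current density
# J_y(z) = dB_x/dz = (B0/lam)*sech(z/lam)**2 from Ampere's law.

def harris_bx(z, b0=1.0, lam=0.5):
    return b0 * math.tanh(z / lam)

def harris_jy(z, b0=1.0, lam=0.5):
    return (b0 / lam) / math.cosh(z / lam) ** 2

# Check dB_x/dz = J_y by central finite difference at z = 0.3:
z, h = 0.3, 1e-6
dbx = (harris_bx(z + h) - harris_bx(z - h)) / (2 * h)
```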
18. Present and recent applications and validation exercises (with references as available):
The primary users now are the code developers, J. Breslau and S. C. Jardin (Princeton) and N. Ferraro (General Atomics), and also Stuart Hudson and a graduate student.

Breslau is studying the onset of internal resistive/ideal modes in NSTX, and is benchmarking M3D-C1 against the M3D code.

Jardin is focusing on high-S tearing and double tearing modes, including 2-fluid effects and the effects of equilibrium toroidal rotation.

Ferraro is studying resistive and 2-fluid effects on ELM stability and the effect of resonant field perturbations.

Recent studies with the code include:

Axisymmetric two-fluid equilibria of DIII-D and NSTX plasmas (see ref. [4] of item 17)

Two-fluid reconnection with a strong guide field [S. Jardin, C. Sovinec, J. Breslau, N. Ferraro, S. Hudson, J. King, S. Kruger, J. Ramos, D. Schnack, "Two-Fluid and Resistive Nonlinear Simulations of Tokamak Equilibrium, Stability, and Reconnection", Paper TH/P9-29, IAEA Meeting, Geneva, Switzerland, 2008]

Linear stability of astrophysical plasmas [N. M. Ferraro, "Finite Larmor radius effects on the magneto-rotational instability", Astrophysical Journal 662 (2007) 512-516]

19. Limitations of code parameter regime (dimensionless parameters accessible): We have been able to run resistive and two-fluid MHD with Lundquist numbers up to 10^8 and still obtain convergent solutions. It may be possible to exceed this, but we have not focused on doing so. We have also run with the ratio of parallel to perpendicular thermal conductivities up to 10^8. The code's numerical methods have proven stable and effective on both MHD and transport timescales.
20. What third party software is used? (eg. Meshing software, PETSc, ...): SCOREC, mpich2, zoltan, parmetis, petsc, superlu, SVN, VACUUM
21. Description of scalability: Our representations and algorithms are very effective compared to more primitive, lower-order representations. The set of scalar variables and projection operators used for the vector momentum and magnetic field evolution equations has several unique and desirable properties, making it a preferred system for solving the magnetohydrodynamics equations in a torus with a strong toroidal magnetic field; the reasons for this are described in [1]. The high-order elements mean that many fewer elements are needed than with linear elements, and thus the associated matrices are smaller. The use of C1 elements also reduces the size of the matrices, since terms with spatial derivatives up to 4th order can be represented directly without introducing intermediate variables that would enlarge the matrices. This is described more quantitatively in [6]. Processor scaling has not yet been studied for the 3D nonlinear code, as we are just now implementing the full nonlinear 3D capability. Our goal is to use about 100 processors per plane and about 100 toroidal planes for a high-resolution case, which would be 10^4 processors.
22. Major serial and parallel bottlenecks: We expect the PETSc sparse matrix solves to be the major bottleneck.
23. Are there smaller codes contained in the larger code? Describe: No.
24. Supported platforms and portability: STIX and NERSC
25. Illustrations of time-to-solution on different platforms and for different complexity of physics, if applicable:
