|SCHEDULED MAINTENANCE 8th of January 2018|
Dear MonARCH user,
Please be advised of a scheduled maintenance of MonARCH on the 8th of January, 2018, from 8:00 AM to 5:00 PM.
This outage is necessary to expand the capacity of the file system, as we will be physically installing additional storage to the existing Lustre infrastructure.
This maintenance requires an orderly shutdown of the Lustre file service; all nodes will therefore be drained of running jobs, and ssh/scp access to the login node will be unavailable. A reservation is now active on the cluster to ensure that jobs do not commence unless they can complete before the outage. Pending jobs will remain in the queue until the service resumes.
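The reservation works by comparing each pending job's requested wall time against the time remaining before the outage: a job is allowed to start only if it would finish first. A minimal sketch of that check in shell (the dates and the 72-hour wall time are illustrative, not taken from the real cluster configuration; GNU date is assumed):

```shell
# Sketch of the scheduler's check against a maintenance reservation.
# All times are illustrative examples.
now=$(date -u -d "2018-01-05 09:00" +%s)     # pretend "current" time
outage=$(date -u -d "2018-01-08 08:00" +%s)  # maintenance window opens
walltime=$(( 72 * 3600 ))                    # job's requested wall time: 72 h

if [ $(( now + walltime )) -le "$outage" ]; then
  echo "job may start before the outage"
else
  echo "job stays pending until after maintenance"
fi
```

If MonARCH's scheduler is Slurm (as on M3), you can inspect the active reservation with `scontrol show reservation` and your own pending jobs with `squeue -u $USER`.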
We apologise for the inconvenience caused and thank you for your understanding. For any concerns regarding this maintenance, please contact firstname.lastname@example.org.
The Monash HPC Team
|IMPORTANT ANNOUNCEMENT ABOUT MonARCH V2|
Subject: A new and improved MonARCH cluster is in preparation
Dear MonARCH user,
We wish to advise you that in the next few weeks we will be provisioning a new MonARCH cluster. The new MonARCH will continue to serve the university’s HPC users as its primary community, and will remain distinct and independent from MASSIVE M3, while being closely aligned with it. Specifically, the new MonARCH will feature:
We have scheduled the release of the new MonARCH for the 23rd of October 2017. The current MonARCH will remain available for about four weeks after this release date, so that you can verify that your jobs run successfully on either cluster. Rest assured, we will make every effort to ease your transition to the new MonARCH cluster.
This is the culmination of a major undertaking to align the operations of MonARCH and MASSIVE into a single configuration and management framework, reducing system heterogeneity, and thus enhancing our ability to provide better HPC user support.
Further details of the new MonARCH will be made available closer to the release date. For any queries or concerns, please feel free to contact us at email@example.com.
The Monash HPC Team
October 1 2017 Update:
To stay updated with the development of MonARCH v2, please visit our "work in progress" page: Work In Progress Information on MonARCH v2
MonARCH (Monash Advanced Research Computing Hybrid) is a next-generation HPC/HTC cluster, designed from the ground up to address the emerging and future needs of the Monash HPC community.
A key feature of MonARCH is that it is provisioned through R@CMon, the Research Cloud @ Monash facility. Through the use of advanced cloud technology, MonARCH can be reconfigured and grown dynamically. As with any HPC cluster, MonARCH presents a single point of access for computational researchers to run calculations on its constituent servers.
MonARCH is designed to develop continually over time. It currently consists of 35 servers under two complementary hardware specifications:
- high-core servers - two Haswell CPU sockets with a total of 24 physical cores (or 48 hyperthreaded cores) at 2.80 GHz
- high-speed servers - two Haswell CPU sockets with a total of 16 physical cores (or 32 hyperthreaded cores) at 3.20 GHz
For data storage, we have deployed a parallel file system service using Intel Enterprise Lustre, providing over 300 TB of usable storage with room for future expansion.
The MonARCH service is operated by the Monash HPC team, with continuing technical and operational support from the Monash Cloud team and the eSolutions Servers-and-Storage and Networks teams.
If you have found MonARCH useful for your research, we would be very grateful if you would acknowledge us with text along the lines of:
This research was supported in part by the Monash eResearch Centre and eSolutions-Research Support Services through the use of the MonARCH HPC Cluster.
Applying for Access
MonARCH is available to all Monash researchers. To apply for access, please visit this access page for self-service instructions. For any assistance, please email firstname.lastname@example.org.