There will be some changes in the number of resources provided by the entire system (upper limit of resource occupation, etc.) in FY2020 from FY2019. Please refer to the usage plan for FY2020.
(Amended on April 16) Since sufficient compute nodes are available, we have tentatively raised the limit on the number of compute nodes per user on weekdays from 30 to 72. This limit will be revised if the compute nodes become congested.
(Amended on July 2) Due to congestion of the compute nodes, the limit on the number of concurrently running jobs per user on weekdays has been reduced from 72 to 50.
・Summer season electricity period (July-September)
As in previous years, the number of provided nodes will not change from normal operation. However, this may change if there is strong demand for power saving from the university or society. In recent years, the university's Energy Conservation Promotion Office has issued a request in late June for cooperation on energy saving from July 1 to September 30. On TSUBAME2, the queue configuration was changed and peak-shift operation was performed, in which the number of usable nodes differed between daytime and nighttime. On TSUBAME3, however, the queue configuration itself is more flexible and significant power saving is achieved at all times, so no special operation was performed in FY2018 or FY2019.
・Normal period (other than July-September)
Executing a normal job
Since the utilization rate tends to be lower on weekends than on weekdays, the limits on parallel and concurrent execution differ between weekdays and weekends.
| | Weekdays (*1) | Weekends (*2) |
|---|---|---|
| Number of jobs running simultaneously per user | 100 jobs | 100 jobs |
| Number of slots (CPU cores) running simultaneously per user | 2016 slots | 4032 slots |
| Maximum parallelism per job | 144 (*3) | 144 |
| Maximum execution time per job | 24 hours | 24 hours |
*1: Weekdays: jobs that start between Sunday 9:00 and Friday 16:00.
*2: Weekends: jobs that start between Friday 16:00 and Sunday 9:00. Public holidays are not treated specially, to keep processing simple.
*3: The per-job limit is set to 144 for implementation reasons, but because the 2016-slot limit also applies, the effective maximum on f_node remains 72 nodes, the same as before.
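The interaction between the per-job parallelism limit and the slot limit in note *3 can be sketched as simple arithmetic. This is an illustrative calculation only; the core count of 28 per f_node is an assumption based on the f_node hardware specification, not stated in this notice.

```python
# Why the effective f_node parallelism is 72 even though the nominal
# per-job limit is 144: the weekday slot (CPU core) cap is exhausted
# before the parallelism cap is reached.

CORES_PER_F_NODE = 28      # assumed core count of one f_node
SLOT_LIMIT_WEEKDAY = 2016  # weekday slot cap per user
PER_JOB_PARALLELISM = 144  # nominal per-job parallelism limit

# The binding constraint is whichever limit is hit first.
effective_f_nodes = min(PER_JOB_PARALLELISM,
                        SLOT_LIMIT_WEEKDAY // CORES_PER_F_NODE)
print(effective_f_nodes)  # → 72
```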
There is no change from FY2019.
| | April-September | October-March |
|---|---|---|
| Number of reserved nodes (total) | 270 nodes | 135 nodes |
| Maximum time per reservation | 168 hours (7 days) | 96 hours (4 days) |
| Total reservation slots one group can hold at the same time | 12960 node-hours | 6480 node-hours |
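The node-hour budget in the table above constrains how many nodes a group can reserve for a maximum-length reservation. The following is an illustrative calculation using the April-September figures; the variable names are ours, not part of any TSUBAME3 tool.

```python
# How the node-hour budget interacts with the reservation length cap
# (April-September settings from the table above).

NODE_HOUR_BUDGET = 12960   # total reservation slots per group (node-hours)
MAX_RESERVATION_H = 168    # maximum length of one reservation (hours)

# Largest number of nodes a group could hold for a single full-length
# (168-hour) reservation without exceeding its node-hour budget:
max_nodes_full_length = NODE_HOUR_BUDGET // MAX_RESERVATION_H
print(max_nodes_full_length)  # → 77
```

In other words, reserving the full 270 nodes is only possible for shorter periods (12960 / 270 = 48 hours), while a full 168-hour reservation is limited to at most 77 nodes.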
・Upper limit on the amount of memory for resource type f_node
To ensure stable operation of the nodes, the upper limit on the amount of memory for f_node enforced by the batch job scheduler will be changed from 240 GB to 235 GB in April 2020. For more information, click here.
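A job whose per-node memory request was tuned to the old 240 GB cap will be rejected under the new cap. The following is a minimal sketch of that check; the function and dictionary names are illustrative and not part of the actual TSUBAME3 tooling.

```python
# Sketch of the per-node memory cap change for resource type f_node:
# 240 GB before April 2020, 235 GB from April 2020 onward.

F_NODE_MEM_CAP_GB = {"fy2019": 240, "fy2020": 235}

def fits_f_node(mem_request_gb: float, fiscal_year: str = "fy2020") -> bool:
    """Return True if a per-node memory request fits under the cap."""
    return mem_request_gb <= F_NODE_MEM_CAP_GB[fiscal_year]

print(fits_f_node(240, "fy2019"))  # → True  (old cap)
print(fits_f_node(240))            # → False (exceeds the new 235 GB cap)
print(fits_f_node(235))            # → True
```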