The final competition will be held at Dalian University of Technology, China, from April 21 to April 25, 2019. The competition consists of HPL, HPCG, CESM, Face SR, wtdbg2, a mystery application, a group competition, and a team presentation.
Please find the agenda below.

Date       Time           Content
April 20                  Check-in
April 21   08:00-20:00    Announcement of contest rules, cluster building and testing
                          (The runtime power consumption of the entire cluster is not limited.)
April 22   08:00-20:00    Cluster building and testing
April 23   08:00-18:00    Performance testing of HPL, HPCG, Group competition, CESM
                          (The runtime power consumption of the entire cluster should be less than 3000W.)
April 24   08:00-18:00    wtdbg2, Face SR, Mystery Application
April 25   08:30-12:15    Team presentation
           14:30-18:30    Awards ceremony

ASC19 Student Supercomputer Challenge Final Competition Notification
Rules of the final stage
1. Optimization methods that are only applicable to specific parameters or input data are strictly prohibited.
2. If there are any modifications on the algorithm, the new algorithm must be mathematically equivalent to the original one.
3. If any rules given above are violated, a score of zero will be given for the corresponding task.
Note: If in doubt, a team should submit a query to the contest committee before the competition on whether a specific optimization method violates the rules; the evaluation committee will make a decision before the competition. Otherwise, the team will have no opportunity to provide further explanation if its optimization method is ruled out by the evaluation committee during the competition.
Group competition
A new group competition will be introduced in the ASC19 finals. Groups will be formed from different teams based on a draw held on site. The group competition application will be completed through full cooperation within each group. The group competition result will not be included in the finals' total results; however, the teams in the winning group will receive the group competition awards and the related bonuses. (See Appendix II)

Arrangement of the final stage
Every team is required to fill in and submit the Hardware Platform Equipment of the Final Stage form (Appendix I) to the technical support email ([email protected]) of this contest by April 2. The committee will then prepare the corresponding equipment for the teams to use in the final competition.
Notes on Collaboration between ASC and ISC-HPCAC Student Cluster Competition (SCC)
ASC Student Supercomputer Challenge (ASC) and ISC-HPCAC Student Cluster Competition (SCC) have agreed to collaborate on the competition. Under this collaboration, the champion of the ISC-HPCAC SCC will earn a direct place in the final round of next year's ASC, while the first two winning teams from the ASC final will secure direct places in the ISC-HPCAC SCC of the same year. In addition, qualification for the ISC-HPCAC SCC shall be given to other ASC finalists according to the final ranking in either of the following circumstances:
1.  Either of the two ASC winning teams has already entered the ISC-HPCAC SCC for reasons including but not limited to previous championships in the ISC or SC SCC.
2.  Either of the two ASC winning teams has given up its ISC-HPCAC SCC qualification.
The clauses above apply to all ASC19 teams.

Appendix I

ASC19 Student Supercomputer Challenge
——Hardware Platform and Equipment of the Final Stage
Restriction of power consumption and hardware platform

  1. The runtime power consumption of every team's cluster must be under 3000W; otherwise, the current task result becomes invalid. Within this power limit, each team should design its system to achieve the best performance on the test applications.
  2. All teams should build their design based on the Inspur NF5280M5 server. The components listed in the table below will be provided by Inspur. Teams may also use other components (except the server itself) at their own cost. (The NF5280M5 has one 8-pin power cable for each Pascal and Volta GPU; the NF5280M5 server can host at most four GPUs.) During the final contest, the system platform cannot be rebooted or changed. Every team is required to fill in and submit the Hardware Platform Equipment of the Final Stage form (like the table below) to [email protected] by April 2. The configuration may be changed due to unforeseen circumstances.

Item       Name                  Configuration
Server     Inspur NF5280M5       CPU: Intel Xeon Gold 6230 x 2, 2.1GHz, 20 cores
                                 Memory: 32GB x 12, DDR4, 2933MHz
                                 Hard disk: 480GB SSD SATA x 1
HCA card   FDR                   InfiniBand Mellanox ConnectX®-3 HCA card, single-port QSFP, FDR IB
Switch     GbE switch            10/100/1000Mb/s, 24-port Ethernet switch
           FDR-IB switch         SwitchX™ FDR InfiniBand switch, 36 QSFP ports
Cable      Gigabit CAT6 cable    CAT6 copper cable, blue, 3m
           InfiniBand cable      InfiniBand FDR copper cable, QSFP port, used together with the InfiniBand switch

Appendix II
ASC19 Student Supercomputer Challenge
——Technical Regulations and Evaluation Criteria of the Final Stage
A. Restrictions
All of the contest applications shall be run on each team’s cluster on site:
The power consumption must be under 3000W. Otherwise, no result will be accepted.
B. Group competition

  1. Each team will draw an ID number on the spot, and all teams with the same ID number will form a group with that ID. Each group will consist of 4 teams. The draw will take place on the morning of the first day of cluster construction.
  2. The group competition application and its workloads will be announced on the first day of the competition. The teams in a group may work together to finish the application, but each team must still run and finish every workload of the application on its own cluster, which must not be operated, directly or remotely, by members of other teams in the group. The group result is the average of the summed results of the teams in the group.
  3. The teams within a group may work together on application compilation, debugging, optimization, and discussion; however, only options related to parallel settings may be modified in the input files or command lines. Other modifications of the workloads are prohibited. Every workload result must pass the correctness check, and the goal is to achieve the shortest runtime for each workload.
  4. The power restriction of the test platform is 3000W. If the power consumption of the system exceeds 3000W during the contest, the current task result becomes invalid.
  5. The results of the group competition will be announced on the morning of the second day of the competition. The winning group will be awarded the group competition prizes and corresponding bonuses. The teams in the winning group will share the bonuses equally.
  6. The group competition result will not be included in the finals total results.

C. Performance Optimization (90 points)
I. HPL performance optimization (9 points):

  1. Platform requirement: The runtime power consumption must be under 3000W. Otherwise, the current task result becomes invalid. 
  2. Goal: Achieve the highest performance while passing the correctness check.
  3. Software downloading: http://www.netlib.org/benchmark/hpl/

II. Performance optimization of HPCG (9 points):

  1. Platform requirement: The runtime power consumption must be under 3000W. Otherwise, the current task result becomes invalid.
  2. About run time: HPCG (version 3.0) runs must be at least 1800 seconds (30 minutes) as reported in the output file. The Quick Path option is not allowed.
  3. Software downloading: http://www.hpcg-benchmark.org/software/index.html

III. Performance optimization of CESM (18 points):

  1. Platform requirement:  The power restriction of the test platform is 3000W. If the power consumption of system exceeds 3000W during the contest, the current task result becomes invalid.
  2. Goal: The committee will announce several CESM workloads during the finals. Each team may only modify the options related to parallel settings; other modifications of the workloads are prohibited. Each team must pass the correctness check for each workload, and the goal is to achieve the shortest runtime for each workload.
  3. Software downloading: https://svn-ccsm-models.cgd.ucar.edu/cesm1/release_tags/cesm1_2_2/  (version 1.2.2 Stable)

IV. Performance optimization of WTDBG (18 points):

  1. Platform requirement: The power restriction of the test platform is 3000W. If the power consumption of system exceeds 3000W during the contest, the current task result becomes invalid. 
  2. Goal: The committee will announce several wtdbg2 workloads during the finals. Each team may only modify the options related to parallel settings or those explicitly specified; other modifications of the workloads are prohibited. Each team must complete all workloads and pass the correctness check for each workload, and the goal is to achieve the shortest runtime for each workload.
  3. WTDBG source code downloading: https://github.com/ruanjue/wtdbg2/releases/tag/v2.3

V. Performance optimization of the Mystery Application (18 points):

  1. Platform requirement: The power restriction of the test platform is 3000W. If the power consumption of system exceeds 3000W during the contest, the current task result becomes invalid.  
  2. Goal: The committee will announce the Mystery Application software and the corresponding workloads on site to all teams at the same time. Each team can then perform application compilation and optimization; each team may only modify the options related to parallel settings. Other modifications of the workloads are prohibited. Every workload result must pass the correctness check, and the goal is to achieve the shortest runtime for each workload.

VI. Face Super Resolution Challenge (18 points):

  1. Goal: Face Super Resolution (FSR), also known as face hallucination, is a domain-specific super-resolution problem. As a specific case of Super-Resolution (SR), the aim of FSR is to generate high-resolution (HR) face images from low-resolution (LR) face images. One of the ultimate goals in FSR is to learn image intensity correspondences between LR and HR faces from a large-scale dataset and to generate HR face images close to the ground-truth HR face images. In the final competition, each team should tune the algorithm it designed in the preliminary competition to perform 4x FSR upscaling of face images that were down-sampled with a bicubic kernel. For instance, the resolution of a 400x600 image after 4x upscaling is 1600x2400. As an example, an HR face image with a resolution of 128x128 corresponds to a 4x down-sampled image with a resolution of 32x32.

    1. On site at the final competition, the committee will supply the scoring script, the training dataset, and the test dataset. All test-dataset face images have identical resolution.
    2. Each team should submit all of the reconstructed high-resolution face images of the test dataset for the scoring test. The goal is to achieve an identity similarity (IS) value close to 1. IS is the cosine similarity of the two feature vectors of the HR face and the SR face, where each feature vector is the 512-D embedding extracted by the SphereFace model (https://github.com/clcarwin/sphereface_pytorch); see the sketch after this list.
    3. Each team is required to use PyTorch for this task. Any other deep learning framework is prohibited.

 

  2. Platform requirement: The power restriction of the test platform is 3000W. If the power consumption of the system exceeds 3000W during the contest, the current task result becomes invalid.
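For illustration only, below is a minimal PyTorch sketch of the two operations described above: 4x bicubic down-sampling of an HR face image, and computing the IS value as the cosine similarity of two 512-D feature vectors. It assumes the SphereFace embeddings have already been extracted with the model from the linked repository; the tensor names, shapes, and dummy data are illustrative assumptions, not the committee's scoring script.

```python
# Illustrative sketch only -- not the committee's scoring script.
# Assumes 512-D SphereFace embeddings have already been extracted
# (e.g. with the model from https://github.com/clcarwin/sphereface_pytorch).
import torch
import torch.nn.functional as F

def identity_similarity(feat_hr: torch.Tensor, feat_sr: torch.Tensor) -> float:
    """Cosine similarity between the HR and SR 512-D feature vectors (the IS value)."""
    return F.cosine_similarity(feat_hr.view(1, -1), feat_sr.view(1, -1)).item()

def bicubic_downsample_4x(hr_image: torch.Tensor) -> torch.Tensor:
    """4x down-sampling with a bicubic kernel, e.g. 1x3x128x128 -> 1x3x32x32."""
    return F.interpolate(hr_image, scale_factor=0.25, mode="bicubic", align_corners=False)

if __name__ == "__main__":
    # Dummy data standing in for real embeddings and a real HR face image.
    feat_hr = torch.randn(512)
    feat_sr = feat_hr + 0.05 * torch.randn(512)   # an SR embedding close to the HR one
    print("IS =", identity_similarity(feat_hr, feat_sr))

    hr = torch.rand(1, 3, 128, 128)               # N x C x H x W
    lr = bicubic_downsample_4x(hr)
    print("LR shape:", tuple(lr.shape))           # (1, 3, 32, 32)
```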

D. Evaluation Method of the Applications

The applications, their points, and the evaluation methods are listed below.

Group competition (100 points)
Let $N$ be the number of workloads and $S_j$ the full score of the jth workload. The score $P_{se}$ of each group is given as:

    $P_{se} = \frac{1}{4}\sum_{i=1}^{4}\sum_{j=1}^{N} S_j \cdot \frac{T_{j,\min}}{T_{ij}}$

where $T_{ij}$ is the runtime of the jth workload achieved by the ith team within the group, and $T_{j,\min}$ is the minimum runtime of the jth workload among all the participating teams.

Performance Optimization (90 points)

HPL (9 points)
Let $R_i$ be the actual performance of the ith team and $R_{\max}$ the maximum performance among all teams. The score $P_1$ is given as:

    $P_1 = 9 \cdot \frac{R_i}{R_{\max}} \cdot \delta$

where $\delta = 1$ if the team gets a correct result, and $\delta = 0$ if the team gets no result or an invalid result.

HPCG (9 points)
$P_2$ is calculated in the same way as $P_1$ for HPL.

CESM (18 points)
Let $N$ be the number of workloads and $S_i$ the full score of the ith workload. The score $P_3$ is given as:

    $P_3 = \sum_{i=1}^{N} S_i \cdot \frac{T_{i,\min}}{T_i} \cdot \delta_i$

where $T_i$ is the team's runtime of the ith workload, $T_{i,\min}$ is the minimum runtime of the ith workload among all the participating teams, $\delta_i = 1$ if the team gets a correct result, and $\delta_i = 0$ if the team gets no result or an invalid result.

WTDBG (18 points)
$P_4$ follows the scoring criteria issued on site; see those criteria for details.

Mystery Application (18 points)
$P_5$ is calculated in the same way as $P_3$ for CESM.

Face Super Resolution Challenge (18 points)
The score $P_6$ is calculated based on the formula below:

    $P_6 = 18 \cdot \frac{IS - IS_{bicubic}}{IS_{\max} - IS_{bicubic}}$

where $IS$ is the identity similarity achieved by the team, $IS_{\max}$ denotes the highest IS value achieved by any team, and $IS_{bicubic}$ denotes the IS value achieved by the bicubic method. The IS gained by each team must be greater than $IS_{bicubic}$; otherwise, the score will be zero. Therefore, the score lies in the range $[0, 18]$. The $IS$ value is calculated as follows:

    $IS = \frac{f_{HR} \cdot f_{SR}}{\|f_{HR}\|\,\|f_{SR}\|}$

where $f_{HR}$ is the feature vector extracted from the HR face image and $f_{SR}$ is the feature vector extracted from the SR face image. The two images have a high similarity if $IS$ is close to 1 and a poor similarity if $IS$ is close to 0.

Performance total points: 90
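As a worked example, the following Python sketch shows how the per-benchmark scores described above could be computed under the formulas given in this section. The function and variable names are illustrative assumptions; the official scoring is performed by the committee's own scripts.

```python
# Illustrative sketch of the scoring formulas described above (not an official script).
# Assumes P1 = 9 * R / R_max * delta for HPL/HPCG, and
# P3 = sum_i S_i * T_i_min / T_i * delta_i for CESM/Mystery-style workloads.

def hpl_score(performance, best_performance, correct, full_score=9.0):
    """Score for HPL/HPCG-style benchmarks: ratio to the best team's result."""
    delta = 1.0 if correct else 0.0
    return full_score * (performance / best_performance) * delta

def workload_score(runtimes, best_runtimes, full_scores, correct_flags):
    """Score for CESM/Mystery-style tasks: sum over workloads of
    full_score * (minimum runtime across teams / this team's runtime)."""
    total = 0.0
    for t, t_min, s, ok in zip(runtimes, best_runtimes, full_scores, correct_flags):
        delta = 1.0 if ok else 0.0
        total += s * (t_min / t) * delta
    return total

# Example: a team reaching 80% of the best HPL result with a valid run.
print(hpl_score(40.0, 50.0, correct=True))  # -> 7.2
# Example: two CESM workloads worth 9 points each.
print(workload_score([120.0, 300.0], [100.0, 300.0], [9.0, 9.0], [True, True]))  # 7.5 + 9.0 = 16.5
```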

E. Team Presentation (10 points)

  1. Each team should give a slide (PPT) presentation in an order decided by a draw. Both the slides and the speech must be in English, with up to two student speakers.
  2. The presentation should be completed within 10 minutes; exceeding the time limit will lower the score accordingly. The judges will ask questions for about 3-5 minutes after the presentation.
  3. The evaluation committee will evaluate the presentation of every team; the full score is 10 points.
  4. Team advisors may observe their own team's presentation session.
Contact Us
Technical Support: Yu Liu, Weiwei Wang, [email protected]
Media: Jie He, [email protected]
Collaboration: Vangel Bojaxhi, [email protected]
General Information: [email protected]

 

Copyright 2019 Asia Supercomputer Community. All Rights Reserved