
Commit b424aa9
gr125IllusiveAldebaran authored and committed
minor changes to namd
1 parent ac9c998

1 file changed
content/posts/sc25-scc24-post-mortem.md
Lines changed: 2 additions & 3 deletions
@@ -262,12 +262,11 @@ There's a very important lesson to take from the mystery benchmark: **You can’
 
 ### NAMD
 
-The NAMD tasks were fun; they were designed to emulate real scientific research done with the software, and a significant part of the tasks involved scientific analysis. The tasks tested a wide range of our knowledge: there were two physical chemistry simulations, two replica exchange simulations (one of which could only be run using the CPU mode), and a benchmarking challenge that allowed us to change any and every parameter we wanted in order to get the fastest complete benchmark run. These tasks were very different from what we (or any of the teams, really) had prepared for, which fostered a lot of communication within teams to troubleshoot and get simulations working.
+The NAMD tasks were fun; they were designed to emulate real scientific research done with the software, and a significant part of the tasks involved scientific analysis. The tasks tested a wide range of our knowledge: there were two physical chemistry simulations, two replica exchange simulations (one of which could only be run using the CPU mode), and a benchmarking challenge that allowed us to change any and every parameter we wanted in order to get the fastest complete benchmark run. These tasks were very different from what we (or any of the teams, really) had prepared for, which fostered a lot of communication with multiple other teams who were very willing to discuss issues and troubleshoot.
 
 From early on, it was clear that we would not be able to finish all the tasks in the given time, as many of the simulations took **hours** to run. Additionally, our hardware put us at a disadvantage compared to other teams (for example, one of the replica exchange simulations would have taken more than 12 hours to complete for us, compared to 6-8 hours on other clusters). There was a lot of difficult decision-making involved, and incomplete results were submitted for most of the tasks.
 
-> I think that overall, we did the best we could have given the situation, and our results reflected that. The experience was challenging but fun, and I feel like I walked away with a better understanding of molecular dynamics research. I also felt a strong sense of community among the teams in regards to NAMD; not one of the five tasks were easy to complete, and while troubleshooting I talked to multiple other teams who were very willing to discuss issues and come up with a solution together.
-> <br> &emsp;&emsp; &ndash; Gauri
+Overall, we did the best we could have given the situation, and our results reflected that. Even though our preparation did not directly cover many of the tasks, it left us ready to adapt and tackle each one of them, a fact that was also reflected in our team interview with the application judge (he said it was clear we knew what we were talking about!). We submitted the task results with no major issues, with about an hour left in the competition.
 
 ### ICON
 During the competition, the task we were given for ICON turned out to be really interesting: we had a time limit of 3 hours, measured via timestamp logging in our output submission file. Within those 3 hours, we had to configure the start and end dates of the ICON simulation for a set of given input files and values. This tested our knowledge of how fast ICON could run on our system with the parameters we chose. Set a simulation too short, and we would waste precious minutes that could have allowed a longer simulation. Set it too long, and the entire run would be invalid, wasting 3+ hours.
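The time-budgeting decision the ICON task describes can be sketched as: given a measured simulation rate, pick an end date that fits inside the 3-hour wall-clock window, holding some of the budget back because an overrun invalidates the whole run while an underrun only wastes minutes. This is a minimal sketch, not the team's actual method; the throughput and safety-margin values are hypothetical.

```python
from datetime import datetime, timedelta

def pick_end_date(start: datetime,
                  sim_days_per_wall_hour: float,
                  wall_budget_hours: float,
                  safety_margin: float = 0.15) -> datetime:
    """Choose a simulation end date that fits the wall-clock budget.

    sim_days_per_wall_hour: measured throughput on the cluster
        (the value used below is hypothetical).
    safety_margin: fraction of the budget held in reserve, since
        running too long invalidates the run, while finishing early
        only forfeits some simulated time.
    """
    usable_hours = wall_budget_hours * (1.0 - safety_margin)
    sim_days = usable_hours * sim_days_per_wall_hour
    return start + timedelta(days=sim_days)

# Hypothetical: the cluster advances ~4 model days per wall-clock hour,
# and the competition window is 3 hours.
start = datetime(2020, 1, 1)
end = pick_end_date(start, sim_days_per_wall_hour=4.0, wall_budget_hours=3.0)
```

With these made-up numbers, 15% of the 3-hour budget is reserved, leaving 2.55 usable hours and a simulated span of about 10.2 days; a larger margin trades simulated time for a lower risk of the run being invalidated.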
