DeepSense Forum
Queries regarding Google Colab
Hello,
We are from the HSC team participating in the Multi-Modal Beam Prediction Challenge 2022. We have a few queries, listed below.
1) Our RAM on Google Colab seems to crash every time we load the LIDAR point-cloud data into .npy files, since the dimensions are too large. Could you give us some suggestions on how to work with this large dataset on Google Colab?
2) What is the procedure for claiming a GPU from the organizers to run our model?
3) We are not able to find the Slack account for the challenge; could you send us the link or an invite?
Regards,
Rashi
Hi Rashi,
Please find the answers to your questions below:
1) Our RAM on Google Colab seems to crash every time we load the LIDAR point-cloud data into .npy files, since the dimensions are too large. Could you give us some suggestions on how to work with this large dataset on Google Colab?
The individual LIDAR (.ply) files are only around 1 MB each, so memory issues like this should not normally arise.
- Can you please provide more context on what exactly you are trying to do here?
- Are you facing the same problem when you load just one data sample? If not, the batch size might be the issue and something you should look into; see the sketch after this list for one way to keep per-sample memory low.
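In the meantime, one low-memory approach on Colab is to convert each .ply file to its own .npy file one at a time, and then load samples lazily during training instead of concatenating everything into one giant array. Below is a minimal sketch of this idea; the directory names and the open3d dependency are illustrative assumptions, not part of the challenge starter code.

```python
# A minimal sketch: convert each LIDAR .ply file to its own .npy file,
# one at a time, so only a single point cloud is ever held in RAM.
# Paths and the open3d dependency are illustrative assumptions.
import glob
import os

import numpy as np
import open3d as o3d  # pip install open3d

ply_dir = "lidar_ply"   # hypothetical input directory of .ply files
npy_dir = "lidar_npy"   # hypothetical output directory
os.makedirs(npy_dir, exist_ok=True)

for ply_path in sorted(glob.glob(os.path.join(ply_dir, "*.ply"))):
    cloud = o3d.io.read_point_cloud(ply_path)
    points = np.asarray(cloud.points, dtype=np.float32)  # (N, 3) xyz array
    name = os.path.splitext(os.path.basename(ply_path))[0] + ".npy"
    np.save(os.path.join(npy_dir, name), points)
    # `points` is replaced on the next iteration, so peak memory stays
    # at roughly one point cloud rather than the whole dataset.

# During training, load one sample at a time, memory-mapped, instead of
# keeping the full dataset in RAM:
sample = np.load(os.path.join(npy_dir, name), mmap_mode="r")  # last file, as an example
```

With this layout, a data loader only needs to read the single .npy file for each sample it serves, which keeps Colab's RAM usage roughly constant regardless of dataset size.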
2) What is the procedure for claiming a GPU from the organizers to run our model?
Please find the steps below:
1. Fill out this form: [https://2ja3zj1n4vsz2sq9zh82y3wi-wpengine.netdna-ssl.com/wp-content/uploads/2021/06/ITU_Challenge_Compute_Platform_protocol_v1.pdf]
2. Send an email to AI5GChallenge@itu.int
Please note that GPU access is granted for a limited time only.
3) We are not able to find the Slack account for the challenge; could you send us the link or an invite?
For the Multi-Modal Beam Prediction Challenge, please use this forum instead of Slack for posting your questions. You are also welcome to email your questions to the challenge organizers at competition@34.237.18.208
Regards,
Team DeepSense