
Conversation

juyongjiang (Contributor) commented Sep 10, 2021

Hi there,

I am a fan of the RecBole framework. Given the complexity of RecBole, I am providing a simple but feasible way to achieve multi-GPU training.
The core idea is to re-wrap the internal Interaction data into a PyTorch DataLoader object. For more details, please check the "fix_multi_gpus" branch in my pull request.

Note that this is just one promising way to realize multi-GPU training. I hope it inspires you to come up with an even better approach.
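
As a rough illustration of that wrapping idea (a sketch only, not the exact code in the branch; the helper name wrap_interaction_field and its tensor argument are made up for this example, and torch.distributed must already be initialized):

    import torch
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler

    def wrap_interaction_field(field_tensor: torch.Tensor) -> DataLoader:
        # field_tensor stands in for one tensor taken from an Interaction,
        # shape (num_interactions, ...). A tensor already supports __len__ and
        # row indexing, so DataLoader can consume it directly; the
        # DistributedSampler then assigns a disjoint slice of rows to each GPU.
        return DataLoader(
            field_tensor,
            batch_size=field_tensor.shape[0],                 # whole shard in one batch
            sampler=DistributedSampler(field_tensor, shuffle=False),
        )

Each process then iterates its own loader and moves the resulting batch to its GPU.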

To train a model (e.g. BERT4Rec) with multiple GPUs, you just need to:

  1. Set multi_gpus: True in your config.yaml file.
  2. Launch one process per GPU (a sketch of the per-process setup this implies follows these steps):
     $ python -m torch.distributed.launch --nproc_per_node=3 run_recbole.py --model=BERT4Rec --config_files recbole/properties/model/BERT4Rec.yaml
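
The sketch below shows the per-process setup the launcher command implies (illustrative only; the fix_multi_gpus branch may handle this inside the Trainer, and how the local rank is delivered depends on your PyTorch version):

    import argparse
    import os
    import torch
    import torch.distributed as dist

    # Older versions of torch.distributed.launch pass --local_rank as an argument;
    # newer launchers export the LOCAL_RANK environment variable instead.
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int,
                        default=int(os.environ.get("LOCAL_RANK", 0)))
    args, _ = parser.parse_known_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl")   # one process per GPU (3 with --nproc_per_node=3)

    # After building the model, wrap it so gradients are synchronized across GPUs:
    # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])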

Best Regards,
John

2017pxy (Member) commented Sep 10, 2021

@juyongjiang Hi, thanks for your PR; we will check it carefully.

2017pxy self-requested a review on September 10, 2021 at 12:01
Code under review in recbole/trainer/trainer.py:

    dis_loader = DataLoader(dataset,
                            batch_size=dataset.shape[0],
                            sampler=DistributedSampler(dataset, shuffle=False))
    for data in dis_loader:
        batch_data = data
Member:

I do not understand this loop; it seems batch_data will just be the last data yielded by dis_loader. Could you please explain it?

Member:

And did you test your code on some datasets such as ml-100k? Could you provide us with the performance results of the models? I want to know whether model performance changes much compared with single-GPU training.

juyongjiang (Contributor, Author) commented Sep 10, 2021:

Hi, Xingyu! Yeah, my pleasure! In the DataLoader I construct, I assign batch_size=dataset.shape[0], which means it takes all of the current batch's data at once. So the length of dis_loader is exactly one, i.e. the loop behaves like for data in range(1).

https://github.com/juyongjiang/RecBole/blob/0d35771629f65a9a06ad7e66dd11bfbe06091971/recbole/trainer/trainer.py#L173-L180
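
For readers following along, here is a toy check of that single-iteration behavior (plain PyTorch, not RecBole code; the rank and world size are fixed by hand so it runs without a launcher):

    import torch
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler

    data = torch.arange(12).view(12, 1)       # 12 fake interactions
    sampler = DistributedSampler(data, num_replicas=3, rank=0, shuffle=False)
    loader = DataLoader(data, batch_size=data.shape[0], sampler=sampler)

    print(len(loader))           # 1 -> the for-loop body runs exactly once
    print(next(iter(loader)))    # the 4 rows the sampler assigns to rank 0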

juyongjiang (Contributor, Author):

Yeah, of course! Please wait a moment; I will provide a table comparing the performance with single-GPU training.

juyongjiang (Contributor, Author) commented Sep 10, 2021:

@2017pxy Hi, Xingyu! I have the experimental results. Multi-GPU training does not hurt performance much, but it reduces the training time by a factor of about 3.78. BTW, I ran the experiment only once, so I think the small performance drift can be ignored. : )
Note that the "original" row is the result of running your unmodified RecBole code, and the "multi-GPUs" row was produced with 3 GPUs.
[Image: results table comparing single-GPU and 3-GPU training]

Contributor:

@2017pxy Any further questions or comments? Thanks in advance.

2017pxy (Member) commented Nov 17, 2021:

Hi @juyongjiang @hunkim, sorry for the late reply.

Following your implementation, our team modified the trainer and ran some tests. We found that your implementation works well for model training. Thanks for your contribution!

However, since the time cost of run_recbole comes mainly from model evaluation, we want to implement multi-GPU evaluation as well and release it together with multi-GPU training. Unfortunately, we ran into some problems when applying your implementation to evaluation, because the data organization for evaluation is different. So I am sorry to say that this new feature will still take some time to release, and it might not even make it into the next version.

Thanks again for your implementation, and if you have any ideas or suggestions about multi-GPU evaluation, please let us know.
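
In case it helps the discussion, one common pattern for distributed evaluation (offered only as a hedged sketch, not as RecBole's API) is to let each rank evaluate its own shard of users, accumulate metric sums locally, and reduce the sums across GPUs before averaging:

    import torch
    import torch.distributed as dist

    def reduce_mean_metric(local_metric_sum: float, local_user_count: int, device) -> float:
        # local_metric_sum: sum of a metric (e.g. NDCG@10) over this rank's users.
        # local_user_count: number of users evaluated on this rank.
        buf = torch.tensor([local_metric_sum, float(local_user_count)], device=device)
        dist.all_reduce(buf, op=dist.ReduceOp.SUM)     # element-wise sum over all ranks
        total_sum, total_users = buf.tolist()
        return total_sum / total_users                 # global per-user average

As long as every user is assigned to exactly one rank, the reduced average matches the single-GPU result.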

juyongjiang (Contributor, Author):

@2017pxy Okay, got it! Thanks for your reply. I will implement multi-GPU evaluation as well and open a new pull request. : )

hunkim (Contributor) commented Sep 10, 2021

@juyongjiang Cool!

KlaineWei commented:
Hello, I used your method to implement multi-GPU training for KGAT, but after setting multi_gpus: True, the parameter does not seem to take effect: it is not printed in the log. Is there any other setting I have missed?
