Commit 95a5f00

Move hooked embeddings to cpu to avoid oom (EleutherAI#34)

Authored-by: Samuel Weinbach (sweinbach)
Co-authored-by: Samuel Weinbach <samuel.weinbach@gmail.com>
1 parent c745f3d

1 file changed: deepspeed/runtime/engine.py
Lines changed: 1 addition & 1 deletion
@@ -242,7 +242,7 @@ def hook_fn(module, input, output):
             return
         else:
             key = module.__class__.__name__
-            self.layer_outputs[key] = output
+            self.layer_outputs[key] = [o.cpu() if torch.is_tensor(o) else o for o in output]

     def get_all_layers(net):
         for name, layer in net._modules.items():
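The one-line change above implements a common pattern: when a forward hook stores layer outputs, keeping them on the GPU retains every activation in device memory and can cause an out-of-memory error; copying each tensor to CPU before caching avoids this. Below is a hedged, self-contained sketch of that pattern. The class and attribute names (`LayerOutputCollector`, `layer_outputs`, `hook_fn`) mirror the diff but are illustrative, not the exact DeepSpeed code:

```python
# Sketch of the hook pattern from this commit: cache each module's output,
# but move tensors to CPU first so activations don't pile up on the GPU.
# Names here are illustrative; only the .cpu() offload line mirrors the diff.
import torch
import torch.nn as nn


class LayerOutputCollector:
    def __init__(self, net):
        self.layer_outputs = {}
        # Register the hook on leaf modules only, mirroring a typical
        # get_all_layers-style recursive registration.
        for module in net.modules():
            if len(list(module.children())) == 0:
                module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        key = module.__class__.__name__
        # Normalize single-tensor outputs to a sequence for uniform handling.
        outs = output if isinstance(output, (tuple, list)) else (output,)
        # The fix from the diff: detach tensors from the accelerator by
        # copying to CPU; leave non-tensor outputs untouched.
        self.layer_outputs[key] = [o.cpu() if torch.is_tensor(o) else o for o in outs]


net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
collector = LayerOutputCollector(net)
_ = net(torch.randn(3, 4))
print(sorted(collector.layer_outputs))  # e.g. ['Linear', 'ReLU']
```

Note that `.cpu()` is a copy for GPU tensors and (near) a no-op for CPU tensors, so the sketch runs without a GPU; on CUDA it frees the cached activations from device memory at the cost of a device-to-host transfer per hooked layer.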

0 commit comments
