test-bai-cpu

Currently in trajnet++, all scenes that contain only the primary pedestrian are removed. When I add these scenes back, training fails with the following error:

return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (8x36 and 288x256)

The cause is in trajnetbaselines/lstm/gridbased_pooling.py: the grid produced by occupancy() can have the wrong shape in the single-pedestrian case. Normally the grid shape is [num_tracks * batch_size, self.pooling_dim, self.n, self.n], e.g. [32, 2, 12, 12], but when only one pedestrian is present the code produces [num_tracks, self.pooling_dim, self.n, self.n], e.g. [1, 2, 12, 12]. It should be [8, 2, 12, 12] in this case.
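
For illustration, here is a minimal standalone sketch (not the trajnet++ code itself; batch_size = 8, pooling_dim = 2, n = 12 and the 288 -> 256 embedding layer are inferred from the shapes in the error above). It reproduces the same RuntimeError when a [1, 2, 12, 12] grid is flattened into 8 rows, and shows that a [8, 2, 12, 12] grid passes through the linear layer as expected:

```python
import torch
import torch.nn as nn

# Dimensions inferred from the error message: pooling_dim * n * n = 2 * 12 * 12 = 288,
# and the downstream linear layer expects 8 rows of 288 features.
batch_size, pooling_dim, n, hidden = 8, 2, 12, 256
embedding = nn.Linear(pooling_dim * n * n, hidden)  # weight shape: 256 x 288

# Grid as produced in the single-pedestrian case: [1, 2, 12, 12] (288 elements total)
bad_grid = torch.zeros(1, pooling_dim, n, n)
try:
    embedding(bad_grid.reshape(batch_size, -1))  # 288 elements -> 8 rows of only 36 features
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (8x36 and 288x256)

# Grid with the expected leading dimension: [8, 2, 12, 12]
good_grid = torch.zeros(batch_size, pooling_dim, n, n)
out = embedding(good_grid.reshape(batch_size, -1))  # 8 rows of 288 features
print(out.shape)  # torch.Size([8, 256])
```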
